Posts by tschensie

    macom, do you use this app? Is it possible to dig down and select just a subfolder in an SMB share?


    Update: Reading the documentation, I see that I can. Now to figure out how to write the file path, and whether it will take a local IP.


    Update2: Ha-wee! It works.

    Can you tell me how you did it?
    I'm trying to get a shared OMV folder into Nextcloud, but I can't connect it with the External Storage app.

    It seems I can't reach the IP of my server from inside the Docker container.
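
    In case it helps with the "can't reach the IP" part, here is a hedged check from inside the Nextcloud container; the container name, the share user and the 192.168.x.x address are assumptions to adapt, and ping/smbclient may first have to be installed inside the container:

    docker exec -it nextcloud ping -c 3 192.168.10.1                  # is the OMV box reachable from the container at all?
    docker exec -it nextcloud smbclient -L //192.168.10.1 -U someuser # can the SMB shares be listed with your credentials?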

    Backports are enabled for the kernel by default on OMV. omv-extras also provides a switch to turn them on and off.


    Uh, no. There is no 6.x kernel. OMV 5.x uses the 5.9 kernel when backports are enabled. Otherwise, it will use the standard Debian 10 4.19 kernel.

    Hmmm, but I did not activate it. I did a fresh install and backports were enabled by default. It started with 5.4 and now it's 5.9.

    I don't have a 4.x kernel installed on my system.
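
    For reference, a quick way to list which kernel packages are actually installed (plain Debian tooling, nothing OMV-specific assumed):

    dpkg -l | grep linux-image    # installed kernel packages, e.g. 4.19.x vs. 5.x backports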


    I found some files in /etc/systemd/network:


    openmediavault-bond0.netdev
    openmediavault-bond0.network
    openmediavault-enp4s0f0.network
    openmediavault-enp4s0f1.network
    openmediavault-enp5s0f0.network
    openmediavault-enp5s0f1.network
    openmediavault-enp8s0.network

    Is this why bond0 is still active?

    The openmediavault-bond0.network file shows the correct network config.


    Can I just delete the files and create a new bond after restarting the server?

    Or could bond0 also be referenced somewhere else?
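
    A minimal sketch for checking what still defines bond0 before deleting anything, assuming a stock OMV 5 systemd-networkd setup with the bonding driver loaded:

    cat /etc/systemd/network/openmediavault-bond0.netdev    # bond definition (mode etc.)
    cat /etc/systemd/network/openmediavault-bond0.network   # IP configuration for the bond
    cat /proc/net/bonding/bond0                             # the kernel's live view of the bond and its slaves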

    Hi.

    I have a problem with the NICs in my server.

    It has an onboard NIC from Realtek and a 4-port network adapter from Intel.

    The onboard NIC is used as the admin port (enp8s0); the 4-port adapter is bonded as 802.3ad for data transfer (bond0).

    In the web GUI under Network / Interfaces, bond0 isn't shown anymore, but it is active.

    bond0 has a static IP which is still reachable on the network, but the hostname (nas) is now bound to the IP of the admin port (it was on bond0 before).

    When I try to recreate the bond via the web GUI (all 4 NICs are available for this), it creates bond1 and gives an error on the console: "invalid link ... on bond0".

    So I deleted bond1 again and restarted the server.

    In the web GUI I can't see any bond, only the admin NIC, but the server is still reachable on both IPs.

    "IP link show" shows the following:


    root@NAS:~# ip link show

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether bc:5f:f4:97:01:fc brd ff:ff:ff:ff:ff:ff

    3: enp4s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000

    link/ether 0a:b5:ae:4c:49:1e brd ff:ff:ff:ff:ff:ff

    4: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000

    link/ether 0a:b5:ae:4c:49:1e brd ff:ff:ff:ff:ff:ff

    5: enp5s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000

    link/ether 0a:b5:ae:4c:49:1e brd ff:ff:ff:ff:ff:ff

    6: enp5s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000

    link/ether 0a:b5:ae:4c:49:1e brd ff:ff:ff:ff:ff:ff

    7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

    link/ether 0a:b5:ae:4c:49:1e brd ff:ff:ff:ff:ff:ff

    8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default

    link/ether 02:42:72:7a:cb:e0 brd ff:ff:ff:ff:ff:ff

    10: veth91ec054@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default

    link/ether ae:f8:a7:68:54:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0

    12: veth9769d6c@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default

    link/ether 0a:dc:a2:a4:8c:5d brd ff:ff:ff:ff:ff:ff link-netnsid 1

    14: veth47902f9@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default

    link/ether 52:60:f0:b4:c4:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 2

    root@NAS:~#

    As you can see, there's still a bond0.

    /etc/hosts has an entry "192.168.10.2 NAS.fritz.box NAS" (the IP of the admin port).

    Where is bond0 configured, and how can I delete it so I can recreate it? And how can I bind the hostname to the data IP again?
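
    A hedged sketch of where to look: OMV keeps its own record of the network setup in its configuration database, and the hostname-to-IP binding comes from /etc/hosts; the paths below are the standard OMV 5 locations, but treat the grep pattern as an assumption:

    grep -n "bond0" /etc/openmediavault/config.xml    # is the bond still in OMV's database?
    cat /etc/hosts                                    # where the hostname NAS is bound to 192.168.10.2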

    I found something, but I don't know if it could be the reason:


    The OMV server has 5 network cards: an onboard Realtek chip and a 4-port Intel network adapter.

    The 4 ports are bonded to bond0 with IP address 192.168.10.1; the Realtek port has the IP 192.168.10.2.

    The single port is for administration (because in OMV 2 the bonded ports sometimes disappeared and my server wasn't reachable until I ran omv-firstaid on the console).

    When I set up the server with OMV 5 I created the bond via the web GUI. About two weeks ago I noticed that the bond wasn't shown anymore under the network adapters, so I created a new bond, but it showed up as bond1.

    Today I connected a monitor to the server and saw some lines like "bond0: invalid new link on slave ...", and the same for bond1.

    "ip link show" showed me the 4 intel ports as master for bond0 and bond1, also bond0 and bond1 with the same ip-adress.

    I deleted bond1 via the GUI and rebooted the server.

    Now something strange: in the network settings there's only 1 network adapter (the Realtek), "ip link show" still shows bond0 and all 4 Intel ports with bond0 as their master, but the console on the monitor shows "To manage the system visit the omv web control panel: bond1: 192.168.10.1".

    I think my network config is messed up.

    "cat /proc/net/dev" also shows a lot of dropped received packages

    omv-firstaid doesn't show the bond adapter, just docker0, the 5 ports and the 3 virtual ports for the Docker containers.


    How can I clean this up?
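
    A hedged cleanup sketch, assuming the leftover bond only exists in stale systemd-networkd files and no longer in OMV's database; back up /etc/systemd/network first and run this from the local console, since the data IP will briefly go away:

    rm /etc/systemd/network/openmediavault-bond0.netdev /etc/systemd/network/openmediavault-bond0.network
    ip link delete bond0                # drop the live bond device from the kernel
    systemctl restart systemd-networkd
    # then recreate the bond in the web GUI; on OMV 5 the generated files can also be
    # re-applied with: omv-salt deploy run systemd-networkd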

    Hi.

    For about 2 weeks I have had a problem with my SMB shares:


    When writing files from my computer (Win10) to my SMB shares, the transfer starts at ~80 MB/s; after about 10-15 seconds the speed drops, swings between 3-12 MB/s and stays there until the end.

    Reading files from the SMB share is a constant ~100 MB/s.


    Any suggestion where to look?
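
    A hedged starting point for narrowing it down (disk writeback vs. network), run while a copy is in progress; iostat comes from the sysstat package, and the interface name is an assumption:

    iostat -xm 2                    # per-disk utilization and write throughput
    grep -i dirty /proc/meminfo     # how much dirty data is waiting to be flushed to disk
    ip -s link show bond0           # errors/drops on the data interface during the copy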

    Hi.

    Is there a plugin or Docker container for web-based file access, like the File Station on a QNAP NAS?

    I want to give some friends access to some of my files on my OMV server.

    So I want to create some user accounts and give them access to a few shared folders.

    Ideally they could access my OMV via my IPv4 address in a web browser, see the files, and be able to download or upload.
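
    One container that goes in this direction is File Browser; a minimal sketch, where the container name, host port and shared-folder path are assumptions to adapt (the image serves whatever is mounted at /srv on its internal port 80):

    docker run -d \
      --name filebrowser \
      -p 8080:80 \
      -v /srv/dev-disk-by-label-data/shared:/srv \
      filebrowser/filebrowser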

    Hi.

    I use a hardware RAID, added a new HD and grew the array.

    OMV recognizes the capacity change:


    NAS kernel: [64759.934844] sdd: detected capacity change from 11999967707136 to 15999956942848


    But when I try to resize the filesystem nothing happens, and I get this entry in my system log:


    NAS CRON[27854]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)


    So do I have to boot into GParted and resize it, or is there a way to do this in OMV?
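
    For reference, a command-line sketch of what the resize involves on a grown hardware-RAID volume; the device name, partition layout and filesystem type below (sdd, a single ext4 partition) are assumptions, so check with the first two commands before growing anything:

    parted /dev/sdd print    # confirm the new size and the partition layout
    lsblk -f /dev/sdd        # confirm the filesystem type
    growpart /dev/sdd 1      # grow partition 1 to fill the disk (cloud-guest-utils package)
    resize2fs /dev/sdd1      # grow an ext4 filesystem online; for XFS use xfs_growfs on the mount point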

    Hi.


    Sometimes I have very slow access from my Windows client to my SMB shares.
    Just showing the contents of a directory with 1 file takes a minute or more. Opening files also sometimes takes very long.
    When I look at my server's display, I see the following message on the login screen:


    "Info: task smbd:9162 blocked for more than 120 seconds
    Not tainted 5.4.0-0.bpo.2-amd64 #1 Debian 5.4.8-1~bpo10+1"


    What could be the problem?


    What confuses me is the 5.4.0-0.bpo.2-amd64 message, because GRUB says I'm booting the 5.4.0-0.bpo.3-amd64 kernel.


    But the OMV system information also says it's the 5.4.0-0.bpo.2-amd64 kernel ?(
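
    A quick hedged check for the kernel mismatch: compare what is actually running with what GRUB claims; if the bpo.3 package is installed but bpo.2 is still running, the box most likely just hasn't been rebooted into the newer kernel yet (an assumption, not a diagnosis):

    uname -r                                  # kernel actually running right now
    dmesg | grep -i "blocked for more than"   # collect the hung-task messages for context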