Posts by bysard

    I've just started experiencing this from 9am this morning. The machine had been up for 2 months with no problem. I have RAID10 with 4x 2TB Seagate Constellation ES.3 disks. The top command shows rrdcached at 195% CPU. I'm using an Intel i5 processor with 8GB of RAM. I thought this config was overkill, but it seems it's not enough. The OMV version is 2.1.9.

    Thank you for the additional info, but the point remains that there seems to be a problem with the disk wiping function in the OMV GUI. It doesn't do the job correctly. I tested GUI wiping on two different machines; the results are the same. External commands are needed for a successful wipe.
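    For reference, the kind of external wipe I mean looks roughly like this (sdX is a placeholder for the disk; the dd step is my usual precaution, not strictly required):

    ############################
    # zero the start of the disk to kill leftover metadata (optional precaution)
    dd if=/dev/zero of=/dev/sdX bs=1M count=10
    # write a fresh GPT partition table
    parted /dev/sdX mklabel gpt
    ############################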

    Hi,


    The lab OMV has been successfully set up with a static IP address. The server has been rebooted and is accessible via SSH or WWW. The problem is when I try to add a DNS server in the network section. As soon as I hit the apply button, the server becomes unresponsive to pings (a network crash or a malformed interfaces file again?). I tried this on two separate machines with the same result. Can someone please check what is going on with networking in OMV 0.5x? It seems to be a very unstable release. Currently I'm running OMV v0.5.27.


    br,


    bysaRD

    Hi,


    I have just moved 7x 2TB Seagate Constellation disks from a broken Thecus N7700PRO NAS. I tried to wipe the disks (because OMV had found an existing RAID6 array on them), but got an error on all disks saying the disk cannot be wiped (OMV v0.5.25), so I could not create a new RAID (I had deleted the old one first). So I went to the OMV CLI and ran parted /dev/sdx mklabel gpt on all disks. I could then wipe the disks in the OMV GUI and create a RAID. When the RAID creation was done (RAID5, no spare - it took about 20-24h), I tried to create a file system on this array. I couldn't, because I could not see the RAID array in the drop-down selection. Then I ran parted /dev/md0 mklabel gpt and I could see the drive in the file system creation drop-down menu. Is there a way to do this from the GUI?
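    In short, the workaround as a shell sketch (device names are placeholders for my seven data disks - be careful not to relabel the system disk):

    ############################
    # give every member disk a fresh GPT label so the OMV wipe succeeds
    for d in sdb sdc sdd sde sdf sdg sdh; do
        parted /dev/$d mklabel gpt
    done
    # after the array is built, label the md device so it shows up
    # in the file system creation drop-down
    parted /dev/md0 mklabel gpt
    ############################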

    Hi all,



    I'm pretty frustrated by now, since I've lost a lot of time trying to do a simple task like installing OMV 0.5x from a bootable CD onto a fresh machine. The machine configuration is:


    Intel i5 CPU
    16 GB RAM
    ASUS P7P55D motherboard


    The startup always freezes at "ISOLINUX 4.02 debian-20101014 ETCD Copyright (C) 1994-2010 H. Peter Anvin et al". I have come across at least 5 machines so far with the same problem. I can install Debian 6 or 7 on these machines without a problem, too.


    Any clues?

    "Every User is also encouraged to help is improve OMV in any way."


    I asked if I could translate OMV to the Slovene language and have gotten no reply for more than a month now.

    Hi.


    My suggestion is to stick with OMV 0.4. It has all you need for a Proxmox datastore, and the bonding works without a problem. I tried all bonding types with my HP ProCurve 1810G switch. I too couldn't figure out what's going on with OMV 0.5.


    PhantomSens:


    I tried copying the old interfaces file from 0.4 to 0.5 on the same hardware. The bonding works, but as soon as I (as I've said many times now) merely change the IP on the bond interface, the network setup crashes and you see a new interfaces file with duplicated lines inserted. Please explain how this can be the Linux distro's fault?

    On OMV 0.4 I get average transfers of 2 Gbit/s+ over 4x bonded Intel E1000 interfaces, so it's not really overkill. The KVM cluster servers take their toll (Zoneminder, Zenoss, Bacula, ...). Back to the bonding subject: I did some trial & error research and here are the results.



    My LAB setup is:


    Proxmox KVM server 3.1 as host on 10.4.0.13/24 (single server, no cluster)
    OMV as virtual client on 10.4.0.81/24 over a bridged interface on the host
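    For step 1 below, the extra host bridges were added in the Proxmox host's /etc/network/interfaces roughly like this (bridge names are examples, a sketch rather than my exact config):

    ############################
    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0

    auto vmbr2
    iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
    ############################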


    1. I added two more virtual bridges to the KVM host (as sketched above) and pushed them to the OMV client, so I had 3 interfaces in OMV (eth0, eth1, eth2)
    2. Created a simple mode 1 (active-backup) bond with eth1 and eth2 --> network crash
    3. Edited the interfaces file, deleted the duplicated lines, and restarted the network --> nothing happened; ifconfig shows only lo on 127.0.0.1
    4. Edited the interfaces file again and added lines for eth1 and eth2 (before, only eth0 was set up as manual) with the same settings as eth0 (manual) - see the sketch after this list
    5. Restarted the network --> Device "bonding_masters" does not exist (20 lines of this spam), but the network is accessible again and I can log into OMV over HTTP; the active-backup setup also works, as I tested it by unplugging cables
    6. Checked the network menu in the GUI, where I can now see bond0 with the IP 10.4.0.81 and also all three eth "physical devices"
    7. Changed the IP of the bond to 10.4.0.82/24 and applied the settings --> network crash
    8. Edited the interfaces file and saw the duplicated bonding lines again, and also no lines defining eth1 and eth2
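    The repair from step 4, as a sketch (my lab values; mode 1 = active-backup):

    #########################################
    auto eth1
    allow-hotplug eth1
    iface eth1 inet manual

    auto eth2
    allow-hotplug eth2
    iface eth2 inet manual

    auto bond0
    iface bond0 inet static
    address 10.4.0.81
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-primary eth1
    bond-mode 1
    bond-miimon 100
    #########################################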


    This is not yet a final conclusion, as I'm not done with the trial & error tests, but to me it seems there is some problem with the network config in the OMV GUI in version 0.5x.

    Well... this setup is being used at home. I like overkill, and I also use a Proxmox cluster. The whole cluster has its main GlusterFS storage on OMV, so I really need the link aggregation, because a single gigabit card is a bottleneck in my scenario. I can put some more time into this over the weekend. Will keep you posted.

    "Also found out the switch needs to support NIC Bonding/Link Aggregation as well. Which I suspect yours does as the 0.4 install worked with it."


    This is true for balance-alb and LACP/802.3ad. For balance-rr, as in my case, you don't need a switch that supports link aggregation. I don't use a switch but a MikroTik RouterBOARD RB493G, which supports all link aggregation protocols.


    ############################
    root@mh0-nas0:~# find / -name 'ifenslave'
    /sbin/ifenslave
    /var/lib/dpkg/alternatives/ifenslave
    /etc/alternatives/ifenslave
    /etc/network/if-up.d/ifenslave
    /etc/network/if-post-down.d/ifenslave
    /etc/network/if-pre-up.d/ifenslave


    EDIT:


    I have deleted the duplicated lines in the interfaces file and restarted the network. Now I get an error when trying to restart: Device "bonding_masters" does not exist.
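    As far as I know, that error means the bonding kernel module isn't loaded; loading it by hand should look like this (an assumption of mine, not something I have re-tested on this box):

    ############################
    # load the bonding module so /sys/class/net/bonding_masters exists
    modprobe bonding
    # optionally make it persistent across reboots
    echo bonding >> /etc/modules
    ############################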

    I will do this when I come home from work today. What I've done so far to test exactly what you asked for:


    1. Installed a fresh OMV 0.5
    2. Copied interfaces to interfaces.old
    3. Tried to create a bond, which always resulted in a network crash
    4. Compared the two files - and they are exactly the same! (see the commands after this list)
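    In shell terms, the test boils down to this:

    ############################
    cp /etc/network/interfaces /etc/network/interfaces.old
    # ... create the bond in the OMV GUI, let the network crash ...
    diff /etc/network/interfaces.old /etc/network/interfaces
    ############################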


    Will post the file today.


    EDIT:


    OK, I tested again following the steps described above. The last step turned out differently than before: the files are not the same.


    Old interfaces file before bond creation:
    #########################################
    # The loopback network interface
    auto lo
    iface lo inet loopback
    iface lo inet6 loopback


    # eth0 network interface
    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
    address 10.0.105.10
    gateway 10.0.105.2
    netmask 255.255.255.0
    dns-nameservers 10.0.105.2
    iface eth0 inet6 manual
    pre-down ip -6 addr flush dev eth0
    #########################################



    After bond creation:
    #########################################
    # The loopback network interface
    auto lo
    iface lo inet loopback
    iface lo inet6 loopback


    # bond0 network interface
    auto bond0
    iface bond0 inet static
    address 10.0.105.10
    gateway 10.0.105.2
    netmask 255.255.255.0
    dns-nameservers 10.0.105.2
    bond-slaves eth0 eth1
    bond-primary eth0
    bond-mode 0
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    dns-nameservers 10.0.105.2
    bond-slaves eth0 eth1
    bond-primary eth0
    bond-mode 0
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    iface bond0 inet6 manual
    pre-down ip -6 addr flush dev bond0
    #########################################
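    For comparison, after deleting the duplicated block by hand, the bond0 stanza should read:

    #########################################
    # bond0 network interface
    auto bond0
    iface bond0 inet static
    address 10.0.105.10
    gateway 10.0.105.2
    netmask 255.255.255.0
    dns-nameservers 10.0.105.2
    bond-slaves eth0 eth1
    bond-primary eth0
    bond-mode 0
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    iface bond0 inet6 manual
    pre-down ip -6 addr flush dev bond0
    #########################################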


    After the page became inaccessible, I rebooted OMV from the console. Samba and NFS fail to start (because there is no valid IP assigned).
    I also get this error when booting:


    Reconfiguring network interfaces.../etc/network/interfaces:19: duplicate option
    ifdown: couldn't read interfaces file "/etc/network/interfaces"
    monit: Cannot connect to the monit daemon. Did you start it with http support?
    monit: Cannot connect to the monit daemon. Did you start it with http support?
    monit: Cannot connect to the monit daemon. Did you start it with http support?
    monit: Cannot connect to the monit daemon. Did you start it with http support?
    monit: Cannot connect to the monit daemon. Did you start it with http support?
    monit: Cannot connect to the monit daemon. Did you start it with http support?


    Some 10 to 15 minutes after the last monit error I was able to log in. I ran ifconfig and it returned nothing, not even lo. omv-first-aid: command not found, so I cannot repair it.
    The file /etc/network/interfaces is there, and as user root I can access it. I forgot to check the "interfaces" file permissions before and after bond creation, as I have already reinstalled OMV. :(
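    If it happens again, a manual bring-up from the console should restore emergency access (using my values from the file above; an untested assumption on my part):

    ############################
    ip link set lo up
    ip link set eth0 up
    ip addr add 10.0.105.10/24 dev eth0
    ip route add default via 10.0.105.2
    ############################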


    EDIT2:


    Please disregard my stupidity in mistyping omv-firstaid. It works; I tried again and it repairs the network.