Posts by Ashygan

    Never mind, what I tried below doesn't work.
    I found a way to make it work, though I'm not sure it's the best way to do it:
    Keep the /etc/network/interfaces file as-is.
    Create a /etc/network/interfaces.d/promiscuous file (or whatever name suits you) and write inside:


    Code
    iface enp0s3
        up /sbin/ip link set enp0s3 promisc on


    Replace enp0s3 with your interface name. After a reboot, promiscuous mode is on.
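
    To check that it stuck, I run the detailed link view after rebooting (same check command as in my original post below):

    Bash
    ip -d link show enp0s3

    If I read the output right, "promiscuity 1" means it's enabled.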

    Thanks for the answer, but I can't figure out how to do it.
    If I edit the /etc/network/interfaces file to add this line (up /sbin/ip link set enp0s3 promisc on), promiscuous mode is enabled after a reboot.


    As you're not supposed to edit this file manually, I tried to create a /etc/network/interfaces.d/promiscuous file and put the same line in it, but it doesn't enable promiscuous mode after a reboot.


    /etc/network/interfaces:

    /etc/network/interfaces.d/promiscuous:

    Code
    up /sbin/ip link set enp0s3 promisc on

    Hi,


    I'm playing with Docker on a fresh OMV 4 installation, and I need to enable promiscuous mode on my network card for macvlan networking.
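
    For context, the macvlan network I'm creating looks roughly like this (subnet, gateway and network name are made-up examples, adjust to your LAN):

    Bash
    # create a macvlan network attached to the physical NIC
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=enp0s3 \
      macvlan_lan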


    I can check the mode with

    Bash
    ip -d link show interface_name


    I can enable promiscuous mode with


    Bash
    ip link set interface_name promisc on

    However, when I reboot, promiscuous mode is no longer enabled. How can I make this setting persist?





    Thx

    Thank you for your time.


    Just to let you know, this plugin is discontinued and the maintainer recommends using the Domoticz Docker image. I'll give it a shot, as I've never tried Docker before. I still have the VM solution anyway.


    I don't use many plugins, but I'll play it safe and wait for the official 3.X release.


    Thanks

    Well, I can't say I saw that one coming :)


    The software is Domoticz, a program used to control home-automation hardware. The newest versions require libboost-thread 1.55, and wheezy only has 1.49. As this package depends on other 1.55 libboost packages, installing a single .deb is impossible. There is a PPA, but I fear it might create more problems than it would solve. Do you have any thoughts on this solution?


    The VM idea is interesting, as I already have VirtualBox installed. I may dig into that.


    I suppose the answer to my next question will be "it'll be there when it'll be there", but do you have an idea of when to expect OMV 3.0 stable with jessie? One month, six months, a year?


    Thanks for your answers

    Hi,


    I have a working and stable OMV 1.19 installation on top of a wheezy install. A third-party piece of software requires that I upgrade to jessie, so I'd like to upgrade my OMV installation as well.


    How do I proceed? Should I upgrade Debian first, or OMV? Any extra precautions needed?
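
    For what it's worth, the generic Debian procedure I have in mind is something like this (just a sketch, I assume OMV needs more care than a plain dist-upgrade):

    Bash
    # switch apt sources from wheezy to jessie, then upgrade
    sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
    apt-get update
    apt-get dist-upgrade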


    Thx


    Edit: Sorry, I couldn't add labels to the thread, the server is returning errors.

    OK, I found the 4:00am occurrence: it's defined in /etc/cron.d/cron-apt.


    In the crontab file, cron.daily is supposed to run at 6:25am, but the second mail I get says it runs at 7:35, and I get it at the same time as the soon-to-be-fixed empty anacron mail. I found another occurrence in /etc/cron.daily/openmediavault-cron-apt, which runs cron-apt (I ran it manually and got a mail).
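
    For reference, this is how I hunted for the duplicate entries (a quick sketch):

    Bash
    # list every cron file that mentions cron-apt
    grep -rl cron-apt /etc/cron.d /etc/cron.daily /etc/crontab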


    So I really do have two cron-apt runs per day. According to OMV guidelines, which one should I remove so that it won't come back in a future upgrade?

    Yup, one of the mails was sent today at 7:55, at the same time as the anacron one.


    The other was sent at 4:57.


    Here's the content of my anacrontab:

    Code
    # These replace cron's entries
    1 5 cron.daily run-parts --report /etc/cron.daily
    7 10 cron.weekly run-parts --report /etc/cron.weekly
    @monthly 15 cron.monthly run-parts --report /etc/cron.monthly


    And the content of my /etc/crontab:

    Code
    17 * * * * root cd / && run-parts --report /etc/cron.hourly
    25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
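
    If I read those entries right, each daily/weekly/monthly line is guarded by "test -x /usr/sbin/anacron ||", so it should only fire when anacron is absent. I checked whether anacron is present with:

    Bash
    test -x /usr/sbin/anacron && echo "anacron installed" || echo "anacron missing"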


    I lack knowledge about anacron, but should I disable the cron.daily and cron.weekly lines in my crontab file? Aren't those entries redundant with the anacrontab ones?


    Thx

    Hi, I get 4 mails a day from my up-to-date OMV installation:

    • 1 almost-always-empty anacron mail (I saw this will be fixed in the next release, 1.0.30)
    • 2 identical mails about cron-apt telling me which packages can be updated (one before 5am and the second before 8am)
    • 1 sparesmissing event on a two-disk RAID 1 array (fixed by setting spares to 0 in /etc/mdadm/mdadm.conf, as seen on the forum)

    How can I get only one cron-apt mail per day? By the way, the "software updates" event on the notification page isn't checked, so I suppose I shouldn't be getting any at all.


    Thx

    Never mind, I figured it out. You have to wipe the disk (sdc in my case) on the physical disks page, and then you can recover the array on the RAID management page.


    Unfortunately, I think I found a bug: the state is "clean, degraded, recovering (unknown)", but I suppose a progress percentage should appear where the "unknown" part is. A "cat /proc/mdstat" shows the progress:

    Quote

    Personalities : [raid1]
    md0 : active raid1 sdc[2] sdd[0]
    2930135360 blocks super 1.2 [2/1] [U_]
    [>....................] recovery = 1.1% (33131904/2930135360) finish=266.5min speed=181134K/sec


    unused devices: <none>
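
    For anyone who prefers the command line, I believe the equivalent of wipe-then-recover is something like this (device names from my setup, double-check before running, the first command is destructive):

    Bash
    mdadm --zero-superblock /dev/sdc        # destructive: wipe the old RAID metadata
    mdadm --manage /dev/md0 --add /dev/sdc  # add the disk back, the rebuild starts on its own
    watch cat /proc/mdstat                  # follow the rebuild progress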

    Hi,


    Today I added two drives as a RAID 1 array to my OMV server. Unfortunately, when I moved my computer, one of those drives got unplugged, so the array was degraded when I booted again. I noticed the missing drive, plugged it back in and rebooted. The missing drive shows up under physical drives (/dev/sdc; the other disk is /dev/sdd), but on the RAID management page the drives column only lists /dev/sdd, while the state column switches between "active, degraded" and "clean, degraded".
    There isn't any drive to select when I use the recover option (nor in the create option, btw). Rebooting doesn't change anything.


    A cat /proc/mdstat shows this:


    Code
    Personalities : [raid1]
    md0 : active raid1 sdd[0]
    2930135360 blocks super 1.2 [2/1] [U_]
    unused devices: <none>


    Should I wipe the sdc disk and re-add it? It's the first time I've used an mdadm array. I used an Nvidia fakeraid array before, and removing a disk and re-adding it was the way to go there.
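
    In case it helps, here's what I plan to run to inspect things further before touching anything (a sketch, same device names as above):

    Bash
    mdadm --detail /dev/md0    # array state and which member is missing
    mdadm --examine /dev/sdc   # RAID superblock metadata on the unplugged disk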


    Thx

    Thanks for the answers. Solo0815 was right about the colors.
    For the home/end keys, the problem was that my root user had a /root/.inputrc file without the required lines:

    Code
    "\e[1~": beginning-of-line
    "\e[4~": end-of-line


    My other wheezy installation didn't have such a file. I removed it and everything works as normal. I just wonder how the hell I got this file on this installation.
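
    For reference, the color fix was just uncommenting the usual lines in /root/.bashrc (from memory, something like):

    Bash
    export LS_OPTIONS='--color=auto'
    eval "$(dircolors)"
    alias ls='ls $LS_OPTIONS'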

    Hi,


    With a recent OMV 1.0 installed on top of a fresh wheezy distribution, I connect to the server over SSH from Windows 7 with PuTTY. Whenever I hit the Home or End key to go to the beginning/end of the command line, I get a tilde instead. I also noticed I don't have any colors in the shell (as I should have with an ls command).
    I saw some solutions on the net, but I thought that altering the .Xresources or .screenrc files might have an impact on how OMV interacts with the shell.
    I think this is specific to OMV, because I don't have this problem on other wheezy installations.


    How can I get my Home and End keys back?

    Well, it's less that I fear it than that I find it inconvenient: I host a few web apps on this server, and I like having a samba share to access those files for deployment, maintenance, etc. Either I host everything on the same partition and lose the ability to access it via a samba drive (my main computer runs Windows, so samba is more convenient than scp), or I have to partition my small SSD and risk running out of room because the space isn't shared between partitions.
    I have some trouble understanding why this limitation exists in OMV, as I read on the forum that the volume listing function uses blkid, which does show my system partition.


    Anyway, thanks for the help, I'm going to think about how to work around this problem.

    Thank you for the reply. That was what I feared: while it is possible to share a partition on the system disk, it's not possible to share a folder on the system partition. As I'd rather not partition my OS disk, what happens if I manually add a samba share to smb.conf? Will it be overwritten by OMV the next time I touch the SMB configuration?
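
    For illustration, the kind of stanza I'd add by hand (share name and path are made up for the example):

    Code
    [webapps]
    path = /var/www
    read only = no
    browseable = yes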

    Hi,


    I just installed OMV kralizec on top of a fresh wheezy install. Everything runs smoothly, except when I try to add a new shared folder: the volume list is empty. From what I gathered on the forum, that might be because I only have one drive so far, with a single ext4 partition created by Debian, containing the OS. Is that right? If so, is it really impossible to add a shared folder on a system disk?


    Thx