Posts by utamav

    The wireguard packages are in buster-backports now.


Code
sudo apt-get -t buster-backports install wireguard wireguard-dkms wireguard-tools

    I installed wireguard via backports. During installation, I didn't see any errors but when I run modprobe, I get the following:


    Code
    sudo modprobe wireguard
    modprobe: FATAL: Module wireguard not found in directory /lib/modules/4.19.0-0.bpo.6-amd64

    Although,

    Code
    sudo dpkg -l | grep wireguard
    ii wireguard 1.0.20200206-2~bpo10+1 all fast, modern, secure kernel VPN tunnel (metapackage)
    ii wireguard-dkms 0.0.20200215-2~bpo10+1 all fast, modern, secure kernel VPN tunnel (DKMS version)
    ii wireguard-tools 1.0.20200206-2~bpo10+1 amd64 fast, modern, secure kernel VPN tunnel (userland utilities)

    So is wireguard installed or not?
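The packages are installed, but wireguard-dkms only *registers* the module with DKMS; the actual build happens against the running kernel's headers, and if those headers are missing the build silently never happens. A diagnostic sketch (assuming the 4.19 backports kernel from the modprobe error, so the headers also come from buster-backports):

```
# See whether DKMS actually built the module ("installed") or only
# registered it ("added"):
sudo dkms status

# If it was never built, install headers matching the running kernel
# and trigger a rebuild:
sudo apt-get -t buster-backports install linux-headers-$(uname -r)
sudo dpkg-reconfigure wireguard-dkms
sudo modprobe wireguard
```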

    Code
    Mar 1 21:07:00 SERVER-HOSTNAME nmbd[893]: [2020/03/01 21:07:00.695366, 0] ../source3/nmbd/nmbd_become_lmb.c:533(become_local_master_browser)
    Mar 1 21:07:00 SERVER-HOSTNAME nmbd[893]: become_local_master_browser: Error - cannot find server SERVER-HOSTNAME in workgroup WORKGROUP on subnet 172.17.0.1

I keep getting the above error on my server running OMV 4. I don't have any issues with shared folders, but I am trying to understand what might be causing the error. Does anyone have hints?
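One thing worth noting: 172.17.0.1 is the default Docker bridge address, so a plausible guess is that nmbd is trying to act as master browser on docker0 rather than on the real LAN. A sketch of a fix, assuming `eth0` is the LAN interface (adjust the name; in OMV this would go into the SMB/CIFS "Extra options" field rather than directly into smb.conf):

```
[global]
# Hypothetical interface name - restrict Samba to loopback and the LAN NIC
interfaces = lo eth0
bind interfaces only = yes
```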

I performed a fresh install of OMV 4 on a Dell T30 server. I created a new user and pasted in the RFC 4716 public key, but I am unable to log in using the key. Password authentication works.


    Searching on forums, I found this thread.


That thread seems to talk about image issues with the RPi, but it appears to be affecting other builds too.


    My output:


    Code
    getfacl /
    getfacl: Removing leading '/' from absolute path names
    # file: .
    # owner: root
    # group: root
    user::rwx
    group::rwx
    other::r-x
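The `group::rwx` line above is the likely culprit: sshd's StrictModes check rejects public-key logins when `/` (or the home directory chain) is group-writable. The actual fix would be `sudo chmod g-w /`; here is the same operation demonstrated on a scratch directory instead of the real root:

```shell
# Demo on a throwaway directory; the real command is: sudo chmod g-w /
mkdir -p /tmp/rootperm_demo
chmod 775 /tmp/rootperm_demo     # reproduce the bad group-writable mode
chmod g-w /tmp/rootperm_demo     # the fix: drop group write
stat -c '%a' /tmp/rootperm_demo  # mode is now 755
```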

I have OMV running on a USB flash drive. Currently there are some issues with it, and I want to do a fresh install on an SSD (in an external case).


I am looking for ideas on how I can migrate my existing configuration to the fresh install. I don't want to use cloning because I think there is something wrong with the filesystem (the reason for the fresh install in the first place). Is there a way to export the config?
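As far as I know there is no supported config import on a fresh install, but OMV keeps its configuration database in `/etc/openmediavault/config.xml`, so archiving it (plus a few related files) at least gives a reference to reconfigure from. A sketch, with a hypothetical backup path:

```
# /mnt/backup is a placeholder - point this at any drive that survives
# the reinstall
sudo tar czf /mnt/backup/omv-config-backup.tar.gz \
    /etc/openmediavault/config.xml \
    /etc/fstab \
    /etc/samba/smb.conf
```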

I have set up an Rsync job to back up my Nextcloud data folder (NC running on a Raspberry Pi on my local network) to my OMV server. So far I did the following:



    Did you ever manage to get this to work?


I tried creating an SSH certificate and copying it to the remote server with the built-in tool. It didn't work, I am guessing because of the issue you mentioned. I then tried copying the certificate into authorized_keys on the remote server. Even plain SSH to that server doesn't work, let alone rsync. So I created an SSH key the old way with ssh-keygen and copied it to the remote server. I validated that I could log in without a password and updated everything on the Rsync page, but I still can't get Rsync to work.




    Done ...
I tested a manual Rsync command and it seems to run just fine, so I am not sure what's wrong with the built-in tool.

    Code
    rsync -avzh user@1.0.3.1:/mnt/volume/ data/ServerBackups/
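One possible explanation (an assumption, not confirmed): the manual run uses my user's SSH key, while the scheduled GUI job runs as root, which has no copy of that key. Pointing rsync at an identity file explicitly would test that theory (the key path below is hypothetical):

```
# -e passes options through to ssh; /root/.ssh/id_rsa is a placeholder path
rsync -avzh -e "ssh -i /root/.ssh/id_rsa" user@1.0.3.1:/mnt/volume/ data/ServerBackups/
```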

I was having a problem with my server stalling, for which I created this thread. I decided it must have been due to a failing USB drive (which was used as the root drive), so I made a clone of it on another drive. Everything was working fine for about two weeks when suddenly the server stalled again. This time I can't find anything in syslog or messages. Any suggestions on where else I could look for this problem?

    Thanks for the input. fsck didn't show any errors but this has happened to me twice recently after running fine for over a year. So I went ahead and created a Clonezilla backup.


One question: my Clonezilla backup of a 32GB drive was only 2GB. Is that normal? I used the clone-to-local-drive option.

    I actually found this in the syslog:

This is repeated many times before I restarted the server, and the messages are gone since the restart. sde is my root drive, which is a USB drive.

I have been running OMV for over a year now, but recently it becomes unresponsive after a few weeks. I can ping the server, but I can't SSH in or access the GUI. I am running it from a USB drive, and it might be that the drive is going bad. What logs can I look at to confirm it? The issue is resolved after a reboot.
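A few places I plan to check, sketched below. Note the journal from the previous boot is only available if persistent journaling is enabled (`Storage=persistent` in journald.conf), and smartctl needs the smartmontools package:

```
# Errors from the boot during which the stall happened:
journalctl -b -1 -p err

# Kernel ring buffer for USB/block-device resets:
dmesg | grep -i -e usb -e sde

# SMART health of the suspect root drive:
sudo smartctl -a /dev/sde
```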

    I see residual folders after unmounting drives in /srv. Is there a way to clean it up? Can they be deleted without impact?


    Code
    drwxr-xr-x 4 root root 4096 Jul 23 17:00 dev-disk-by-label-DVR1TB
    drwxr-xr-x 3 root root 4096 Jul 27 15:04 dev-disk-by-label-PRIMARY
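Since `rmdir` refuses to delete a non-empty directory, it is a safe way to clean up leftover mount points: if any data is still inside, the command simply fails. The real cleanup would be `sudo rmdir /srv/dev-disk-by-label-DVR1TB`; demonstrated here on a scratch path:

```shell
# Create a stand-in for the leftover mount point, then remove it.
# rmdir only succeeds because the directory is empty - that is the safety net.
mkdir -p /tmp/srv_demo/dev-disk-by-label-DVR1TB
rmdir /tmp/srv_demo/dev-disk-by-label-DVR1TB
```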

    When trying to run rsync manually from the GUI, I am getting the above error.

It turns out I am stuck. My older 4TB drive (the one that does not cause a boot conflict) is an Advanced Format drive with 4K sectors, which lets MBR address more than 2TB. The newer 4TB drive (the one that causes the boot conflict) uses 512-byte logical sectors, so MBR can't go past 2TB on it. My only other option is to see if I can make the OMV boot drive GPT as well. From what I read, that requires installing a minimal Debian server with UEFI enabled in the BIOS and then installing OMV on top.


All in all, I am not sure this is going to work with my current setup. I might just have to shuck the drive from the external enclosure and put it in the chassis. Even then, there is a chance the shucked drive might need some tweaking to work with a normal power connector. ;(
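The 2TB cutoff falls straight out of the MBR format: a partition's start and length are stored as 32-bit sector counts, so the largest addressable size is 2^32 times the sector size. That arithmetic is easy to check:

```shell
# 2^32 = 4294967296 sectors; 2^40 bytes = 1 TiB
echo $(( 4294967296 * 512  / 1099511627776 ))  # 512-byte sectors: 2 TiB limit
echo $(( 4294967296 * 4096 / 1099511627776 ))  # 4K-sector drives: 16 TiB limit
```

This is why the 4K-sector drive can carry a >2TB MBR partition while the 512-byte-sector drive cannot.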

    I had to reboot it after unmounting.


After wiping it and formatting it with the GUI, I have the same problem.

    Code
    Disk /dev/sde: 3.7 TiB, 4000786153472 bytes, 7814035456 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier:
    Device Start End Sectors Size Type
    /dev/sde1 2048 7814035422 7814033375 3.7T Linux filesystem

Somehow I end up with GPT again.

I think this has a protective MBR... hence the problem.


    You are right.


Code
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present


But OMV doesn't let me wipe the disk:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; blockdev --rereadpt '/dev/sdd' 2>&1' with exit code '1': blockdev: ioctl error on BLKRRPART: Device or resource busy
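"Device or resource busy" on BLKRRPART usually means something still holds the disk (a mounted partition, mdadm, LVM, or a file browser). A sketch of releasing the device and then wiping both the GPT structures and the protective MBR from the CLI (double-check the device name first; sdd is taken from the error above):

```
# See what is using the disk and unmount any mounted partitions
lsblk /dev/sdd
sudo umount /dev/sdd?*

# Destroy GPT (primary + backup) and the protective MBR
sudo sgdisk --zap-all /dev/sdd
# Belt and braces: clear any remaining filesystem/partition signatures
sudo wipefs -a /dev/sdd
```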