Posts by digitalbots

    To be 100% clear.


    I have a RAID controller that was already running clean on my hypervisor layer (Proxmox), but I passed the RAID controller through to OMV, and after doing a format this is what was on the screen.



    The best I can tell, megaraid (Dell PERC H310) is having issues with the OMV OS.


    So my question: is there something I need to do to get my OS to work better with the PCI passthrough?
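
    One way to narrow this down from inside the OMV guest is to confirm the passed-through controller is visible and claimed by the expected driver, and whether that driver is logging faults. A sketch, assuming the H310 runs stock Dell firmware and so binds to megaraid_sas (cards crossflashed to IT mode bind to mpt2sas/mpt3sas instead):

    lspci -nnk | grep -iA3 raid  # confirm the H310 shows up in the guest and which kernel driver claimed it
    dmesg | grep -i megaraid     # look for controller resets or firmware faults from megaraid_sas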


    ###UPDATE###

    So once I start the rsync it craps the bed (screenshot below) and the machine reboots.
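
    Since the machine reboots outright, the tail of the previous boot's journal is the first place to look for what happened just before the crash. A sketch, assuming persistent journaling (journalctl can only reach back a boot if /var/log/journal exists):

    journalctl -b -1 -e                # jump to the end of the previous boot's log
    mkdir -p /var/log/journal          # if -b -1 finds nothing, enable persistent logging first
    systemctl restart systemd-journald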


    Running omv-update should fix it.

    So a bit of good news, bad news... omv-update did fix omv-firstaid, BUT the GUI went from giving errors to not loading at all after I cleared the cache from omv-firstaid.


    Also, a quick check on services shows EVERYTHING is down: SSH/SAMBA/RSYNC.
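
    When SSH, Samba, and rsync all drop at once, it is usually one root cause rather than three separate failures, so the per-unit state is worth checking from the console. A sketch, assuming the stock Debian unit names that OMV uses:

    systemctl status ssh smbd rsync  # shows whether each unit is dead, failed, or masked
    journalctl -u ssh -e             # log tail for one service; repeat for smbd and rsync
    systemctl restart ssh            # getting shell access back first makes the rest easier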


    I think something in that update from earlier did a number on this box.

    Sorry for typing it wrong, but my SSH service hasn't worked since this update, so I have to retype what is on the screen instead of copy/paste.


    BUT


    I am typing omv-firstaid


    So if you saw my other thread from yesterday, you know my box has been acting funny (I pulled the logs and this seems to have happened after the last update I did yesterday). Anyway!


    When I log into the GUI, all I get is "an error has occurred" on the screen and no data shows up. So I thought I should run omv-firstaid, but when I do I get an error on the screen:


    ModuleNotFoundError: No module named 'openmediavault.subprocess'


    This is a VM, so I think this thing is toast, but I have A LOT OF WORK ON THIS THING. So I would rather fix than rebuild; does anyone know how to fix this?
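
    A ModuleNotFoundError for an openmediavault.* Python module usually points to a half-applied package update rather than a dead install, so reinstalling the openmediavault package is a reasonable first attempt before rebuilding. A sketch, assuming apt itself still works from the console:

    dpkg --configure -a                         # finish any interrupted package configuration
    apt-get update
    apt-get install --reinstall openmediavault  # puts the missing Python modules back on disk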

    Looks like a network error or a crashed rsync / ssh process on the other side.

    Can you run this rsync job again?

    Funny thing is, I was in the web GUI on the host box while doing the rsync and the GUI didn't die. Also, when rsync dies I can just restart the job from the client with no issues.

    It just died again. This time I kept SSH open on both boxes via PuTTY on my Win10 machine. I saw the GUI on the client side give this error again:


    rsync: read error: Connection reset by peer (104)

    rsync error: error in socket IO (code 10) at io.c(794) [receiver=3.1.2]

    rsync: connection unexpectedly closed (327642 bytes received so far) [generator]

    rsync error: error in rsync protocol data stream (code 12) at io.c(235) [generator=3.1.2]


    So instead of waiting I just clicked on the run button again. Now I get this error:


    rsync: failed to connect to 192.168.0.113 (192.168.0.113): Connection refused (111)

    rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]


    BUT both web GUIs are up, and SSH is still open with htop running on both, showing no service spikes.
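
    When it is in the refused state, it is worth probing the daemon directly from the client, since that separates "nothing is listening" from "something is resetting the stream". A sketch, assuming 192.168.0.113 is the daemon side and the default rsyncd port 873:

    rsync rsync://192.168.0.113/  # lists the exported modules (e.g. omvlarge) if the daemon answers
    nc -zv 192.168.0.113 873      # raw TCP probe: "refused" means nothing is listening at that moment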


    I also ran this right after it died:

    root@omv5:~# service rsync status

    ● rsync.service - fast remote file copy program daemon

    Loaded: loaded (/lib/systemd/system/rsync.service; enabled; vendor preset: enabled)

    Active: active (running) since Fri 2022-01-07 15:38:01 EST; 11min ago

    Docs: man:rsync(1)

    man:rsyncd.conf(5)

    Main PID: 24660 (rsync)

    Tasks: 1 (limit: 4915)

    Memory: 3.9G

    CGroup: /system.slice/rsync.service

    └─24660 /usr/bin/rsync --daemon --no-detach


    Jan 07 15:38:01 omv5 systemd[1]: Started fast remote file copy program daemon.

    Jan 07 15:38:01 omv5 rsyncd[24660]: rsyncd version 3.1.3 starting, listening on port 873

    Jan 07 15:38:19 omv5 rsyncd[24759]: name lookup failed for 192.168.0.237: Name or service not known

    Jan 07 15:38:19 omv5 rsyncd[24759]: connect from UNKNOWN (192.168.0.237)

    Jan 07 15:38:19 omv5 rsyncd[24759]: rsync on omvlarge/TV Shows from UNKNOWN (192.168.0.237)

    Jan 07 15:38:19 omv5 rsyncd[24759]: building file list

    Jan 07 15:46:02 omv5 rsyncd[24759]: rsync: read error: Connection reset by peer (104)

    Jan 07 15:46:02 omv5 rsyncd[24759]: rsync error: error in socket IO (code 10) at io.c(794) [sender=3.1.3]

    root@omv5:~#



    So I can't edit the /etc/rsyncd.conf file to add the flag that disables reverse lookups.



    I also went into the rsync area of the GUI and added the IP address under allowed hosts, and it's still kicking me out.

    This is the entry it put into my rsyncd.conf file:


    [omvlarge]

    path = /srv/./dev-disk-by-uuid-1041347b-101a-4cd4-88e7-7492c379a4e6

    uid = nobody

    gid = users

    list = yes

    read only = no

    write only = no

    use chroot = yes

    hosts allow = 192.168.0.237

    lock file = /run/lock/rsyncd-omvlarge
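
    OMV regenerates /etc/rsyncd.conf from its own database on every deploy, which is why hand edits don't stick; the supported path is the module's extra-options field in the GUI, which appends lines to the generated block. As a sketch, the intent is to end up with something like this in the module (the "reverse lookup" daemon parameter is documented for rsync 3.1+, and this daemon reports 3.1.3), skipping the PTR lookup that was failing in the log above:

    [omvlarge]
    # ...generated options as above...
    hosts allow = 192.168.0.237
    reverse lookup = no   # stop rsyncd resolving 192.168.0.237 back to a hostname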




    My OMV 5.x box might be dying or something

    I am not sure what is going on with my rsync as of late, but I have 2 OMV servers and run an rsync job between the two of them. For a good while the rsync job was working with no issue.


    Then today I pulled the logs and found this:


    rsync: read error: Connection reset by peer (104)

    rsync error: error in socket IO (code 10) at io.c(794) [receiver=3.1.2]

    rsync: connection unexpectedly closed (327642 bytes received so far) [generator]

    rsync error: error in rsync protocol data stream (code 12) at io.c(235) [generator=3.1.2]



    So the host is OMV 5.x and the client (the receiver of the files) is OMV 4.x, and these logs are from the 4.x server.


    I run Plex and other services on the 5.x box and I don't see the network dropping out for other services, so I am unsure what is causing this.
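
    "Connection reset by peer" during long transfers, with other services unaffected, often turns out to be the network path under sustained load rather than rsync itself, so it can help to stress the link independently. A sketch, assuming iperf3 can be installed on both boxes (apt-get install iperf3) and <receiver-ip> stands in for the 4.x box:

    iperf3 -s                       # on the 4.x box (receiver): run a throughput server
    iperf3 -c <receiver-ip> -t 600  # on the 5.x box: push traffic for 10 minutes and watch for resets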

    This is super odd to me. I have OMV on PVE. I added a 1.2 TB HD to the OMV VM, did the wipe, mounted it, and now when I try to save the config I get this error. I am not sure what is going on here.


    Quote


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color collectd 2>&1' with exit code '1': debian:
    ---------- ID: configure_collectd_conf_cpu_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/cpu.conf Result: True Comment: File /etc/collectd/collectd.conf.d/cpu.conf is in the correct state Started: 11:30:31.379192 Duration: 109.131 ms Changes:
    ---------- ID: configure_collectd_conf_df_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/df.conf Result: True Comment: File /etc/collectd/collectd.conf.d/df.conf updated Started: 11:30:31.488912 Duration: 39.15 ms Changes: ---------- diff: --- +++ @@ -6,5 +6,6 @@ MountPoint "/srv/dev-disk-by-label-filecloud" MountPoint "/srv/dev-disk-by-label-seafile" MountPoint "/srv/dev-disk-by-id-md-name-raider-raider" + MountPoint "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi4-part1" IgnoreSelected false </Plugin>
    ---------- ID: configure_collectd_conf_disk_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/disk.conf Result: True Comment: File /etc/collectd/collectd.conf.d/disk.conf updated Started: 11:30:31.528518 Duration: 27.668 ms Changes: ---------- diff: --- +++ @@ -5,6 +5,7 @@ Disk "sdc" Disk "sda" Disk "md127" + Disk "sdm" Disk "sdb" IgnoreSelected false </Plugin>
    ---------- ID: configure_collectd_conf_interface_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/interface.conf Result: True Comment: File /etc/collectd/collectd.conf.d/interface.conf is in the correct state Started: 11:30:31.556654 Duration: 23.279 ms Changes:
    ---------- ID: configure_collectd_conf_load_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/load.conf Result: True Comment: File /etc/collectd/collectd.conf.d/load.conf is in the correct state Started: 11:30:31.580346 Duration: 5.416 ms Changes:
    ---------- ID: configure_collectd_conf_memory_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/memory.conf Result: True Comment: File /etc/collectd/collectd.conf.d/memory.conf is in the correct state Started: 11:30:31.586164 Duration: 5.339 ms Changes:
    ---------- ID: configure_collectd_conf_rrdcached_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/rrdcached.conf Result: True Comment: File /etc/collectd/collectd.conf.d/rrdcached.conf is in the correct state Started: 11:30:31.591905 Duration: 5.588 ms Changes:
    ---------- ID: configure_collectd_conf_syslog_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/syslog.conf Result: True Comment: File /etc/collectd/collectd.conf.d/syslog.conf is in the correct state Started: 11:30:31.597911 Duration: 5.442 ms Changes:
    ---------- ID: configure_collectd_conf_unixsock_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/unixsock.conf Result: True Comment: File /etc/collectd/collectd.conf.d/unixsock.conf is in the correct state Started: 11:30:31.603760 Duration: 6.424 ms Changes:
    ---------- ID: configure_collectd_conf_uptime_plugin Function: file.managed Name: /etc/collectd/collectd.conf.d/uptime.conf Result: True Comment: File /etc/collectd/collectd.conf.d/uptime.conf is in the correct state Started: 11:30:31.610589 Duration: 6.332 ms Changes:
    ---------- ID: prereq_collectd_service_monit Function: salt.state Result: True Comment: States ran successfully. Updating debian. Started: 11:30:31.622496 Duration: 1131.58 ms Changes: debian:
    ---------- ID: configure_monit_collectd_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-collectd.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-collectd.conf is in the correct state Started: 11:30:32.464983 Duration: 50.017 ms Changes:
    ---------- ID: configure_monit_filesystem_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-filesystem.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-filesystem.conf is in the correct state Started: 11:30:32.515196 Duration: 18.559 ms Changes:
    ---------- ID: configure_monit_nginx_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-nginx.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-nginx.conf is in the correct state Started: 11:30:32.533947 Duration: 12.697 ms Changes:
    ---------- ID: configure_monit_omv-engined_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-engined.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-engined.conf is in the correct state Started: 11:30:32.546814 Duration: 12.221 ms Changes:
    ---------- ID: configure_monit_php-fpm_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-phpfpm.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-phpfpm.conf is in the correct state Started: 11:30:32.559201 Duration: 12.067 ms Changes:
    ---------- ID: configure_monit_proftpd_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-proftpd.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-proftpd.conf is in the correct state Started: 11:30:32.571438 Duration: 13.1 ms Changes:
    ---------- ID: configure_monit_rrdcached_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-rrdcached.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-rrdcached.conf is in the correct state Started: 11:30:32.584738 Duration: 11.742 ms Changes:
    ---------- ID: configure_monit_system_service Function: file.managed Name: /etc/monit/conf.d/openmediavault-system.conf Result: True Comment: File /etc/monit/conf.d/openmediavault-system.conf is in the correct state Started: 11:30:32.596676 Duration: 24.391 ms Changes:
    ---------- ID: configure_default_monit Function: file.managed Name: /etc/default/monit Result: True Comment: File /etc/default/monit is in the correct state Started: 11:30:32.621250 Duration: 3.438 ms Changes:
    ---------- ID: configure_monit_monitrc Function: file.managed Name: /etc/monit/monitrc Result: True Comment: File /etc/monit/monitrc is in the correct state Started: 11:30:32.624849 Duration: 24.058 ms Changes:
    ---------- ID: test_monit_config Function: cmd.run Name: monit -t Result: True Comment: Command "monit -t" run Started: 11:30:32.650059 Duration: 15.192 ms Changes: ---------- pid: 22628 retcode: 0 stderr: stdout: Control file syntax OK
    ---------- ID: reload_monit_service Function: service.running Name: monit Result: True Comment: The service monit is already running Started: 11:30:32.697958 Duration: 49.778 ms Changes:
    Summary for debian ------------- Succeeded: 12 (changed=1) Failed: 0 ------------- Total states run: 12 Total run time: 247.260 ms
    ---------- ID: configure_collectd_conf Function: file.managed Name: /etc/collectd/collectd.conf Result: True Comment: File /etc/collectd/collectd.conf is in the correct state Started: 11:30:32.754538 Duration: 57.495 ms Changes:
    ---------- ID: start_collectd_service Function: service.running Name: collectd Result: True Comment: Service restarted Started: 11:30:32.881537 Duration: 3397.618 ms Changes: ---------- collectd: True
    ---------- ID: monitor_collectd_service Function: module.run Name: monit.monitor Result: False Comment: No function provided. Started: 11:30:36.285815 Duration: 2.334 ms Changes:
    ---------- ID: install_mkrrdgraph_cron_job Function: file.managed Name: /etc/cron.d/openmediavault-mkrrdgraph Result: True Comment: File /etc/cron.d/openmediavault-mkrrdgraph is in the correct state Started: 11:30:36.288521 Duration: 10.85 ms Changes:
    ---------- ID: generate_rrd_graphs Function: cmd.run Name: /usr/sbin/omv-mkrrdgraph Result: True Comment: Command "/usr/sbin/omv-mkrrdgraph" run Started: 11:30:36.301375 Duration: 19032.382 ms Changes: ---------- pid: 22879 retcode: 0 stderr: stdout:
    Summary for debian ------------- Succeeded: 15 (changed=5) Failed: 1 ------------- Total states run: 16 Total run time: 23.866 s

    Proxmox might be what you are looking for. There is a new version I haven't tried yet.


    OMV runs very well as a VM. I have never used clustering, so test it first.

    I was thinking that was the case. I have Proxmox v6, and my thought was: rolling reboots suck, and if the files were hosted on a cluster I could have a Docker server pointing at the cluster, so a rolling reboot wouldn't kill my network services. But yes, the VM is very stable and quick to load. :D

    I am sure I am way off in asking this. Is there a way to cluster OMV so that I could power down one box but my shares would still be up?


    This is just a nice-to-have; the reboot time on OMV isn't horrible, but the thought crossed my mind, so I wondered if it was possible.

    I have been struggling with this for weeks now.


    Problem: every time I reboot my OMV server, Plex doesn't work. All network shares and NAS shares are unavailable at boot, so the service has to be restarted from Docker after a reboot to fix the issue.


    Solution:

    First, I delayed the Docker start-up by 30 seconds.


    I added the following line to /etc/systemd/system/multi-user.target.wants/docker.service:


    ExecStartPre=/bin/sleep 30


    It should look something like this.


    [Service]

    Type=notify

    # the default is not to use systemd for cgroups because the delegate issues still

    # exists and systemd currently does not support the cgroup feature set required

    # for containers run by docker

    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    ExecReload=/bin/kill -s HUP $MAINPID

    TimeoutSec=0

    RestartSec=2

    Restart=always

    ExecStartPre=/bin/sleep 30
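
    One step the write-up leaves implicit: systemd caches unit files, so the sleep won't take effect until the configuration is reloaded. A short follow-up sketch:

    systemctl daemon-reload                # make systemd re-read the edited unit file
    systemctl restart docker               # apply the 30-second delay without waiting for a reboot
    systemd-analyze verify docker.service  # optional: catches typos in the unit before the next boot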


    This last step isn't needed, but if you have any mounts that take a while to come up and 30 seconds isn't fixing it for you, add "mnt-nas.mount" to the following entries in the same file.



    [Unit]

    Description=Docker Application Container Engine

    Documentation=https://docs.docker.com

    After=network-online.target firewalld.service containerd.service mnt-nas.mount

    Wants=network-online.target mnt-nas.mount

    Requires=docker.socket containerd.service mnt-nas.mount



    This should fix it. At least it fixed it for me.
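
    One caveat on editing /etc/systemd/system/multi-user.target.wants/docker.service directly: that path is normally a symlink to the packaged unit, so a Docker package upgrade can overwrite the changes. The same two tweaks can live in a drop-in override instead, which survives upgrades; a sketch, reusing the mnt-nas.mount example from above:

    # created via: systemctl edit docker
    # (writes /etc/systemd/system/docker.service.d/override.conf and reloads systemd on save)
    [Unit]
    After=mnt-nas.mount
    Wants=mnt-nas.mount

    [Service]
    ExecStartPre=/bin/sleep 30

    Drop-in [Unit] dependencies and extra ExecStartPre= lines are additive, so this delays the start and adds the mount dependency without touching the packaged file.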

    So I added an 8 TB drive to my software RAID array, but when I go to resize it, it's not working; the array is still the same size.


    I went into the OS, found the 8 TB drive, and did a wipe.


    Then I went into the RAID manager and added the hard drive to the software RAID.


    From there I clicked on File Systems, clicked on my array, and hit resize, and nothing happened.


    The setup is 4x 8 TB in RAID 5,

    but it only shows 16 TB free, not 24 TB.


    I then went to the command line and did the following:


    resize2fs /dev/mdxXX


    What came back was "file system already. nothing to do."


    Did I hit some kind of limit on the software RAID?
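
    For what it's worth, there is no size limit in play here: RAID 5 usable capacity is (N-1) x drive size, so 16 TB is exactly what a 3-disk array gives ((3-1) x 8 TB), while four disks should give (4-1) x 8 TB = 24 TB. resize2fs answering "nothing to do" means the filesystem already fills the block device, i.e. the md array itself never grew; that usually happens when the new disk joined as a spare or a reshape is still running. A command-line sketch, assuming the array is /dev/md127 (check the real name in /proc/mdstat):

    cat /proc/mdstat                          # shows the array name, member count, and any running reshape
    mdadm --detail /dev/md127                 # if the new disk is listed as "spare", the array wasn't grown
    mdadm --grow /dev/md127 --raid-devices=4  # reshape onto the 4th disk; this can take many hours
    resize2fs /dev/md127                      # only after the reshape completes will this have work to do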