Posts by etrigan63

    etrigan63 wrote: Docker supposedly has a DNS function for this purpose, but I have yet to see it work.

    For this to work, you need to create a new network for your Docker setup (docker network create) and attach your containers to it with --network nameofmypreviouslycreatednetwork.


    The default bridge network does not resolve DNS names like nameofthedocker.


    More info:


    https://docs.docker.com/v17.09…he-default-bridge-network


    https://docs.docker.com/v17.09…rking/work-with-networks/

    Yeah, did that. No change. Can only access via IP address.
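
    For anyone following along, a minimal sketch of the user-defined-network approach described in that quote; the network, image, and container names here are placeholders:

    docker network create mynet
    docker run -d --name whoami --network mynet containous/whoami
    docker run -d --name tester --network mynet alpine sleep 600
    # on a user-defined network, containers resolve each other by name:
    docker exec tester ping -c 1 whoami

    If that last command resolves, container-name DNS is working on that network.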




    Regarding OMV 5 and MergerFS, see this post: Error creating shared folder on mergerfs drives


    Regarding Traefik, the entire point is that it doesn't resolve. Some key element is missing that everyone assumes is in place, and it is not. My server is mechagon.local. I can ping it from my workstation (Linux) and it resolves to the host IP. If I browse to container.mechagon.local, I get a 404. Everything I read says this should work by itself, and it does not. Also, I am not clear on how Traefik is supposed to interact with the other containers. Do they need to publish ports on the bridge? Traefik is supposed to have a public-facing network, but it is not working as expected. I need a to-do-list level set of instructions to make it work. Oh, and for the record, I can only get Traefik to deploy even marginally correctly using Portainer stacks (docker-compose); deploying via the GUI results in a borked container.
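
    One piece most tutorials leave implicit: nothing on the LAN will resolve container.mechagon.local unless the local DNS server has a wildcard record pointing every subdomain at the Docker host. mDNS/Avahi answers for the bare mechagon.local but typically not for subdomains, which may be exactly the missing element here. A sketch assuming dnsmasq is the LAN resolver; the IP is a placeholder:

    echo 'address=/.mechagon.local/192.168.1.10' | sudo tee /etc/dnsmasq.d/wildcard.conf
    sudo systemctl restart dnsmasq
    # verify from a workstation:
    dig +short heimdall.mechagon.local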

    I am switching back to OMV 4 today as MergerFS is buggered on OMV 5. I will move the web GUI to a new port.


    Regarding having a DE: Traefik uses host headers to route requests to the appropriate container. For example, if I expose a Heimdall container on docker.local, it should present as heimdall.docker.local, and I imagine it would on the Docker host itself. OMV users can't test that way. Docker supposedly has a DNS function for this purpose, but I have yet to see it work.
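
    One way to test host-header routing with no DNS and no desktop environment is to pin the Host header with curl from any machine; this sketch assumes Traefik is published on port 8008 of the host, and both the IP and the port are placeholders:

    curl -H 'Host: heimdall.docker.local' http://192.168.1.10:8008/

    If the right service answers, the routing works and only name resolution is missing.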



    I am trying to implement Traefik as the reverse proxy on my OMV server. All of the tutorials I have read assume you have a desktop environment on the Docker server and that port 80 is available; neither is the case with OMV. I also cannot connect using the host-header names (regardless of the port used) from any machine. All of my connectivity is via mapped ports on the host IP address. I have three more NICs in my OMV server (Dell R710), but activating them is a fiasco. Guidance on the proper way to do this is appreciated. Docker-compose scripts would be a plus, as I can use them as stacks in Portainer.
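
    Not a definitive recipe, but a minimal docker-compose sketch of the kind of stack being asked about, assuming Traefik v2; the entrypoint sits on port 8008 because OMV's web UI holds port 80, and the whoami test service plus all host names are placeholders:

    version: "3"
    services:
      traefik:
        image: traefik:v2.2
        command:
          - "--providers.docker=true"
          - "--providers.docker.exposedbydefault=false"
          - "--entrypoints.web.address=:8008"
        ports:
          - "8008:8008"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      whoami:
        image: containous/whoami
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.whoami.rule=Host(`whoami.mechagon.local`)"
          - "traefik.http.routers.whoami.entrypoints=web"

    Note that whoami publishes no ports at all; Traefik reaches it over the stack's shared network, so backend containers do not need to map anything onto the host. Pasted as a Portainer stack or run with docker-compose up -d, a curl against port 8008 with the matching Host header should return the whoami output.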

    journalctl -xe returns this:


    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A stop job for unit sharedfolders-docker.mount has finished.
    --
    -- The job identifier is 3041 and the job result is failed.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: run-r8d4ca90acaa54b76bf7551408244e20b.scope: Succeeded.
    -- Subject: Unit succeeded
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- The unit run-r8d4ca90acaa54b76bf7551408244e20b.scope has successfully entered the 'dead' state.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: Started /bin/systemctl restart sharedfolders-media.mount.
    -- Subject: A start job for unit run-rb4aaa346cb3b4873952a97344caaa596.scope has finished successfully
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A start job for unit run-rb4aaa346cb3b4873952a97344caaa596.scope has finished successfully.
    --
    -- The job identifier is 3064.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: Unmounting Mount shared folder media to /sharedfolders/media...
    -- Subject: A stop job for unit sharedfolders-media.mount has begun execution
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A stop job for unit sharedfolders-media.mount has begun execution.
    --
    -- The job identifier is 3068.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: sharedfolders-media.mount: Succeeded.
    -- Subject: Unit succeeded
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- The unit sharedfolders-media.mount has successfully entered the 'dead' state.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: Unmounted Mount shared folder media to /sharedfolders/media.
    -- Subject: A stop job for unit sharedfolders-media.mount has finished
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A stop job for unit sharedfolders-media.mount has finished.
    --
    -- The job identifier is 3068 and the job result is done.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: Mounting Mount shared folder media to /sharedfolders/media...
    -- Subject: A start job for unit sharedfolders-media.mount has begun execution
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A start job for unit sharedfolders-media.mount has begun execution.
    --
    -- The job identifier is 3068.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: Mounted Mount shared folder media to /sharedfolders/media.
    -- Subject: A start job for unit sharedfolders-media.mount has finished successfully
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- A start job for unit sharedfolders-media.mount has finished successfully.
    --
    -- The job identifier is 3068.
    Sep 14 01:40:38 mechagon.echenique.local systemd[1]: run-rb4aaa346cb3b4873952a97344caaa596.scope: Succeeded.
    -- Subject: Unit succeeded
    -- Defined-By: systemd
    -- Support: https://www.debian.org/support
    --
    -- The unit run-rb4aaa346cb3b4873952a97344caaa596.scope has successfully entered the 'dead' state.
    Sep 14 01:41:09 mechagon.echenique.local monit[1032]: Cannot create socket to [127.0.0.1]:25 -- Connection timed out
    Sep 14 01:41:09 mechagon.echenique.local monit[1032]: Cannot open a connection to the mailserver 127.0.0.1:25 -- Operation now in progress
    Sep 14 01:41:09 mechagon.echenique.local monit[1032]: Mail: Delivery failed -- no mail server is available
    Sep 14 01:41:09 mechagon.echenique.local monit[1032]: Alert handler failed, retry scheduled for next cycle
    Sep 14 01:41:09 mechagon.echenique.local monit[1032]: '\mechagon.echenique.local' loadavg (5min) of 0.1 matches resource limit [loadavg (5min) > -1223331328.0]
    Sep 14 01:41:14 mechagon.echenique.local postfix/smtpd[6487]: warning: dict_nis_init: NIS domain name not set - NIS lookups disabled
    Sep 14 01:41:14 mechagon.echenique.local postfix/smtpd[6487]: fatal: in parameter smtpd_relay_restrictions or smtpd_recipient_restrictions, specify at least one working instance of: reject_unauth_destination, defer_unauth_destination, reject, defer, defer_if_permit or check_relay_domains
    Sep 14 01:41:15 mechagon.echenique.local postfix/master[2192]: warning: process /usr/lib/postfix/sbin/smtpd pid 6487 exit status 1
    Sep 14 01:41:15 mechagon.echenique.local postfix/master[2192]: warning: /usr/lib/postfix/sbin/smtpd: bad command startup -- throttling
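    An aside on the tail of that log: the postfix fatal is a separate mail misconfiguration, unrelated to the mount failure, and the message itself names the missing setting. Assuming the stock Debian value is wanted, postconf can restore it:

    postconf -e 'smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination'
    systemctl restart postfix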

    I am running the latest build of OMV 5 with the OMV-Extras plugins enabled, and I have Docker, unionfs, and SnapRAID turned on. When I created the merged file system, OMV 5 gave me an error during the commit. I rebooted the server (an arduous process, as my Dell R710 needs a new iDRAC 6 Express module and I have to hard power cycle it to reboot properly) and the mergerfs drive was there without errors. When I went to create a shared folder, I got the following error during the commit:



    Error #0: OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run systemd 2>&1' with exit code '1':
    /usr/lib/python3/dist-packages/salt/modules/file.py:32: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
      from collections import Iterable, Mapping
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
    debian:
    ----------
    ID: remove_sharedfolder_mount_unit_files
    Function: module.run
    Result: True
    Comment: file.find: ['/etc/systemd/system/sharedfolders-appdata.mount', '/etc/systemd/system/sharedfolders-databases.mount', '/etc/systemd/system/sharedfolders-docker.mount', '/etc/systemd/system/sharedfolders-media.mount']
    Started: 01:40:36.819262  Duration: 3.412 ms
    Changes:
      file.find:
        - /etc/systemd/system/sharedfolders-appdata.mount
        - /etc/systemd/system/sharedfolders-databases.mount
        - /etc/systemd/system/sharedfolders-docker.mount
        - /etc/systemd/system/sharedfolders-media.mount
    ----------
    ID: configure_sharedfolder_appdata_mount_unit_file
    Function: file.managed
    Name: /etc/systemd/system/sharedfolders-appdata.mount
    Result: True
    Comment: File /etc/systemd/system/sharedfolders-appdata.mount updated
    Started: 01:40:36.824962  Duration: 3.301 ms
    Changes:
      diff: New file
      mode: 0644
    ----------
    ID: configure_sharedfolder_databases_mount_unit_file
    Function: file.managed
    Name: /etc/systemd/system/sharedfolders-databases.mount
    Result: True
    Comment: File /etc/systemd/system/sharedfolders-databases.mount updated
    Started: 01:40:36.828376  Duration: 2.577 ms
    Changes:
      diff: New file
      mode: 0644
    ----------
    ID: configure_sharedfolder_docker_mount_unit_file
    Function: file.managed
    Name: /etc/systemd/system/sharedfolders-docker.mount
    Result: True
    Comment: File /etc/systemd/system/sharedfolders-docker.mount updated
    Started: 01:40:36.831117  Duration: 2.556 ms
    Changes:
      diff: New file
      mode: 0644
    ----------
    ID: configure_sharedfolder_media_mount_unit_file
    Function: file.managed
    Name: /etc/systemd/system/sharedfolders-media.mount
    Result: True
    Comment: File /etc/systemd/system/sharedfolders-media.mount updated
    Started: 01:40:36.833786  Duration: 2.635 ms
    Changes:
      diff: New file
      mode: 0644
    ----------
    ID: sharedfolder_mount_units_systemctl_daemon_reload
    Function: module.run
    Name: service.systemctl_reload
    Result: True
    Comment:
    Started: 01:40:36.836534  Duration: 0.356 ms
    Changes:
    ----------
    ID: enable_sharedfolder_appdata_mount_unit
    Function: service.enabled
    Name: sharedfolders-appdata.mount
    Result: True
    Comment: Service sharedfolders-appdata.mount is already enabled, and is in the desired state
    Started: 01:40:38.041974  Duration: 476.243 ms
    Changes:
    ----------
    ID: restart_sharedfolder_appdata_mount_unit
    Function: module.run
    Result: True
    Comment: service.restart: True
    Started: 01:40:38.518551  Duration: 51.811 ms
    Changes:
      service.restart: True
    ----------
    ID: enable_sharedfolder_databases_mount_unit
    Function: service.enabled
    Name: sharedfolders-databases.mount
    Result: True
    Comment: Service sharedfolders-databases.mount is already enabled, and is in the desired state
    Started: 01:40:38.570836  Duration: 29.045 ms
    Changes:
    ----------
    ID: restart_sharedfolder_databases_mount_unit
    Function: module.run
    Result: True
    Comment: service.restart: True
    Started: 01:40:38.600333  Duration: 57.413 ms
    Changes:
      service.restart: True
    ----------
    ID: enable_sharedfolder_docker_mount_unit
    Function: service.enabled
    Name: sharedfolders-docker.mount
    Result: True
    Comment: Service sharedfolders-docker.mount is already enabled, and is in the desired state
    Started: 01:40:38.658331  Duration: 29.749 ms
    Changes:
    ----------
    ID: restart_sharedfolder_docker_mount_unit
    Function: module.run
    Result: False
    Comment: An exception occurred in this state: Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/salt/state.py", line 1919, in call
        **cdata['kwargs'])
      File "/usr/lib/python3/dist-packages/salt/loader.py", line 1918, in wrapper
        return f(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 558, in _decorate
        return self._call_function(kwargs)
      File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 263, in _call_function
        raise error
      File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 250, in _call_function
        return self._function(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/salt/states/module.py", line 294, in run
        func_args=kwargs.get(func))
      File "/usr/lib/python3/dist-packages/salt/states/module.py", line 358, in _call_function
        mret = __salt__[name](*arg_type, **func_kw)
      File "/usr/lib/python3/dist-packages/salt/modules/systemd.py", line 906, in restart
        raise CommandExecutionError(_strip_scope(ret['stderr']))
    salt.exceptions.CommandExecutionError: Job for sharedfolders-docker.mount failed. See "systemctl status sharedfolders-docker.mount" and "journalctl -xe" for details.
    Started: 01:40:38.688541  Duration: 38.622 ms
    Changes:
    ----------
    ID: enable_sharedfolder_media_mount_unit
    Function: service.enabled
    Name: sharedfolders-media.mount
    Result: True
    Comment: Service sharedfolders-media.mount is already enabled, and is in the desired state
    Started: 01:40:38.727677  Duration: 28.931 ms
    Changes:
    ----------
    ID: restart_sharedfolder_media_mount_unit
    Function: module.run
    Result: True
    Comment: service.restart: True
    Started: 01:40:38.757052  Duration: 58.266 ms
    Changes:
      service.restart: True
    ----------
    ID: configure_tmp_mount_unit_file
    Function: file.managed
    Name: /etc/systemd/system/tmp.mount
    Result: True
    Comment: File /etc/systemd/system/tmp.mount is in the correct state
    Started: 01:40:38.815782  Duration: 40.409 ms
    Changes:
    ----------
    ID: tmp_mount_unit_systemctl_daemon_reload
    Function: module.run
    Name: service.systemctl_reload
    Result: True
    Comment: State was not run because none of the onchanges reqs changed
    Started: 01:40:38.857181  Duration: 0.007 ms
    Changes:

    Summary for debian
    -------------
    Succeeded: 15 (changed=8)
    Failed: 1
    -------------
    Total states run: 16
    Total run time: 825.333 ms
     in /usr/share/php/openmediavault/system/process.inc:182

    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(60): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(164): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusQH...', '/tmp/bgoutputQH...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(186): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}



    Can anyone make heads or tails out of this gobbledygook?
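
    For what it is worth, the only failed state in that dump is restart_sharedfolder_docker_mount_unit. A few commands that may show what is blocking the unit; the unit and path names are taken from the error above:

    systemctl status sharedfolders-docker.mount
    journalctl -u sharedfolders-docker.mount --no-pager -n 50
    # a mount unit often fails to restart because a process (the Docker daemon
    # is a prime suspect here) still has files open under the mountpoint:
    fuser -vm /sharedfolders/docker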

    I just deployed a big honking OMV4 server to replace my i3-powered one: a Dell R710 (2x Xeon 5520s, 72GB RAM, 240GB boot SSD, 3TB RAID5 for hot data, 5+TB of glacial storage using mergerfs + SnapRAID). I have several Docker containers running on the i3. How do I migrate them to the new machine? I do have Portainer installed on the i3.
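
    Not an official migration path, but one common sketch: move the images with docker save/load (or simply re-pull them), copy the bind-mounted data, then re-create the containers on the new host. The host names, image name, and paths below are placeholders:

    # copy an image across without going through a registry:
    docker save linuxserver/heimdall | ssh root@newserver docker load
    # copy the persistent appdata across:
    rsync -aHAX /sharedfolders/appdata/ root@newserver:/sharedfolders/appdata/
    # then re-create each container on the new host from the same run/compose
    # definition; Portainer can redeploy saved stacks once the data is in place.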



    Hey all,
    I have OMV4 running on an Intel Celeron NUC connected to a JMicron-based hardware RAID5 enclosure; the RAID is managed by the enclosure, not OMV4. I had a faulty UPS for a while, and power blinks would cause the NUC to power off, but the array (having a hard-wired power switch) would come back up before the NUC. When I start the NUC, the boot process hangs on detection of the external RAID. Left alone, it times out and the system boots without the RAID (which is the main storage array). If I am standing there and power cycle the array while the detection is happening, the NUC detects it and all is well in the world. This is obviously not normal. Any ideas?

    Not happening. No console, no way to mount it, no way to change it. I tried the Ubuntu image and that booted to a visible console, but when I tried to set the network by editing /etc/network/interfaces, it ignored the change and continued with DHCP. I give up. I am ordering an i3 NUC and will set it up with Jessie the normal way.

    If I were to mount the microSD card on a Linux machine, would I be able to edit the sshd_config file to allow me to log in as root?


    UPDATE:


    This did not work due to permissions on the disk image: it is owned by user #502, so I can't mount it with write privileges. I have no idea what to do.
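
    For the record, ordinary file ownership inside the image should not matter once the card's root partition is mounted as root; a sketch, with the device name a placeholder:

    sudo mkdir -p /mnt/sd
    sudo mount /dev/sdX2 /mnt/sd              # the ext4 root partition of the card
    sudo nano /mnt/sd/etc/ssh/sshd_config     # e.g. set PermitRootLogin yes
    sudo umount /mnt/sd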

    OK, the new card arrived from Amazon and I successfully imaged it. odroid-jessie booted up fine. However, I can't SSH in as root; I saw in the article that root access via SSH is restricted. So, just to be clear: I have to connect a console, log in as root, and create a user with sudo rights, and then I can SSH in?
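
    That matches the usual Debian procedure. Roughly, from the attached console, with the username and hostname as placeholders:

    adduser newuser               # prompts for a password
    usermod -aG sudo newuser      # grant sudo rights
    # then, from the workstation:
    ssh newuser@odroid.local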

    Well, that was a bust. So I decided not to be a bonehead and to follow your instructions to the letter. I turned off the Odroid, pulled the microSD card, and used Rufus to write the disk image you mentioned. I plugged it back in and powered the unit on, and nothing happened. The fan spins and the power light is on, but the NIC port does not light up. I had the USB3 drive enclosure powered down just in case. Did my XU4 just die?


    UPDATE


    Apparently my microSD card has been rendered useless: Paragon ExtFS for Windows cannot mount it, nor does it appear in the Windows Disk Manager. Time to go get a new one.

    I started with a pre-built Wheezy/Stoneburner image you provided on another thread.
    I used the following commands:

    However, the OMV install failed due to unmet dependencies. That's when I asked about the dist-upgrade; apparently I am not familiar enough with Debian. How do I change the sources to Jessie so I can dist-upgrade?
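
    The usual recipe, assuming stock Debian sources; it is worth checking /etc/apt/sources.list.d/ first, since any OMV entry there uses its own codenames (Stoneburner to Erasmus) and changes separately:

    sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
    apt-get update
    apt-get dist-upgrade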

    The dependency problem is with the upgrade to Erasmus.


    The other is a separate issue I have encountered. Here is a link that claims to have solved it, but I am not sure how they did it.


    http://raspberrypi.stackexchan…rive-changes-to-read-only


    UPDATE:


    apt-get dist-upgrade


    did nothing. No error. However, it did give the following informational message:


    The following packages have been kept back:
    openmediavault openmediavault-clamav php5-pam proftpd-mod-vroot
    0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
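
    Packages get "kept back" when upgrading them would pull in new dependencies; naming them explicitly usually lets apt work that out. The package names here are taken from the output above:

    apt-get install openmediavault openmediavault-clamav php5-pam proftpd-mod-vroot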

    I used the pre-built Odroid XU4 image (Wheezy/Stoneburner) you provided in another thread. Can I just do a dist-upgrade on that and then reattempt the OMV upgrade?


    I encountered the USB/EXT4 issue while using the aforementioned build. I Googled around and found other folks running into this issue with other distros as well. I was wondering whether it is because of EXT4 and, if so, whether an XFS filesystem would solve it.

    I just tried this and got to the apt-get install openmediavault step, where I got clobbered by a bunch of unmet dependencies. Also, my USB drive enclosure keeps slipping into read-only mode. I am using the EXT4 file system; perhaps I should reformat to XFS?
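
    On the read-only symptom: the kernel typically remounts a filesystem read-only after it sees I/O or ext4 errors, so reformatting to XFS would not by itself fix a flaky enclosure or cable. A hedged check, with the device name a placeholder:

    dmesg | tail -n 50            # look for USB resets or ext4 error messages
    umount /dev/sda1
    fsck.ext4 -f /dev/sda1        # repair the filesystem before remounting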