Posts by chrisq

    I'm currently copying all my data from elsewhere into a few of the datasets on this zfs pool. Do you know where I can change the scrub settings? I don't see anything in crontab on omv or in my root crontab.
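
    For reference, on a stock Debian install the scrub schedule doesn't live in any crontab: the zfsutils-linux package drops it into /etc/cron.d. I'm not certain the proxmox packaging does the same, so treat this as a guess rather than the answer:

    # see whether a packaged cron entry is what's kicking off scrubs
    cat /etc/cron.d/zfsutils-linux
    # on Debian this normally runs /usr/lib/zfs-linux/scrub on the second
    # Sunday of each month; comment that line out to disable automatic scrubs

    # stop a scrub that's already running ("tank" is a placeholder pool name)
    zpool scrub -s tank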

    I'm on omv 4.1.31-1. I installed the proxmox kernel and zfs, created a zpool manually (for more control over how the raidz2 was created), and it showed up in the openzfs plugin in omv, so I haven't touched anything in the plugin itself other than to note that my zpool (and the underlying datasets) was there. I've been using it with a few bumps (a bad cable causing UDMA CRC errors, now fixed). When I look at zpool history there's constant spamming of zfs set commands in there, I assume from the plugin, and I'm curious how to get this to stop.
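
    If it helps to pin down where the zfs set commands are coming from, zpool history can show the user and host behind each entry (I haven't actually confirmed it's the plugin this way, so this is just a way to check):

    # -l adds user/host detail per entry, -i includes internal events
    # ("tank" is a placeholder pool name)
    zpool history -il tank | tail -n 20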


    An example of the spam (this is just the latest; it happens every minute or two):

    Also, it set off a zfs scrub last night. Where do I configure when that should happen? I don't see anything for it in the scheduled jobs plugin or in the zfs plugin, and I'd like to make sure it doesn't happen again until I'm done moving all my data onto the pool.

    macom, I was actually messing around with this after working through getting mayan running, and I believe I have a passable compose file below that works with my swarm-enabled docker that takes v3+ yml files. I'm just in the process of logging in and configuring, but the below got it up and running. I hacked the env file into the yml since I don't know how to point portainer to an env file and didn't want to manually input env variables. (I basically know enough to be dangerous to myself and others here; I am not advocating anyone use this, I'm just posting in case it helps someone, or in case someone notices something truly messed up below and has suggestions.)

    I ended up running "docker swarm init" on the CLI so that create stack would use "docker stack deploy" instead of "docker compose", which got mayan working, but it ended up being a bit obnoxious because I had to update it to version 3 of the yml, which made health checks needlessly complex. I ended up just ignoring the health check and making it depend on the start of the other services, but not on the health check, which appears to have gotten mayan up and running.
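
    For anyone following along, this is roughly the CLI side of it. The stack name here is just one I made up, and my understanding (which may be off) is that a version 3 file only accepts a plain list of service names under depends_on, not the condition: service_healthy form from 2.1:

    # one-time: put the engine into swarm mode so portainer's "create stack"
    # deploys with "docker stack deploy" instead of docker-compose
    docker swarm init

    # deploy or update the stack from a version 3 compose file
    docker stack deploy -c docker-compose.yml mayan

    # check that the services actually came up
    docker stack services mayan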


    My 3.0 yml:

    I'm on the latest OMV4 and I get the following error trying to deploy the stack on portainer. I'm guessing I'm out of luck because the docker/portainer in omv4 are too old? I only get this error after I comment out the version info, which stops it from running as well.

    Files

    • Capture.PNG


    I'll take a look at mayan. In my original search for a document manager, paperless seemed to be more popular and have the right amount of complexity for my needs, while mayan seemed to be way overkill and not as actively developed (although looking at their gitlab page I now see that whatever source said their last commit was 2 years ago was incorrect).

    So I modified the compose file as follows, deleted the reference to the env file, and just added the environment variables, but I get "deployment error yaml: unmarshal errors: line 1: cannot unmarshal !!str '2.1' into config.RawService". Is this because it's 2.1 and not version 2.0? I tried just changing the version number with no luck.


    version: '2.1'

    services:
      webserver:
        build: ./
        # uncomment the following line to start automatically on system boot
        restart: always
        ports:
          # You can adapt the port you want Paperless to listen on by
          # modifying the part before the `:`.
          - "8001:8000"
        healthcheck:
          # the check runs inside the container, where gunicorn listens on 8000
          test: ["CMD", "curl", "-f", "http://localhost:8000"]
          interval: 30s
          timeout: 10s
          retries: 5
        volumes:
          - /srv/dev-disk-by-label-bigraid/dockerconf/paperless/data:/usr/src/paperless/data
          - /srv/dev-disk-by-label-bigraid/dockerconf/paperless/media:/usr/src/paperless/media
          # You have to adapt the local path you want the consumption
          # directory to mount to by modifying the part before the ':'.
          - /srv/dev-disk-by-label-bigraid/paperless:/consume
        # This override is here so that the webserver, which doesn't do any
        # text recognition, doesn't have to install unnecessary languages the
        # user might have set in the env-file.
        environment:
          - PAPERLESS_OCR_LANGUAGES=eng
        command: ["gunicorn", "-b", "0.0.0.0:8000"]

      consumer:
        build: ./
        # uncomment the following line to start automatically on system boot
        restart: always
        depends_on:
          webserver:
            condition: service_healthy
        volumes:
          - /srv/dev-disk-by-label-bigraid/dockerconf/paperless/data:/usr/src/paperless/data
          - /srv/dev-disk-by-label-bigraid/dockerconf/paperless/media:/usr/src/paperless/media
          # This should be set to the same value as the consume directory
          # in the webserver service above.
          - /srv/dev-disk-by-label-bigraid/paperless:/consume
          # Likewise, you can add a local path to mount a directory for
          # exporting. This is not strictly needed for paperless to
          # function, only if you're exporting your files: uncomment
          # it and fill in a local path if you know you're going to
          # want to export your documents.
          # - /path/to/another/arbitrary/place:/export
        env_file: docker-compose.env
        command: ["document_consumer"]

    volumes:
      data:
      media:
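
    One thing worth trying before blaming the version line (this is just a sanity check, assuming the file is saved as docker-compose.yml on the OMV box):

    # print the resolved config, or the parse error, without starting anything
    docker-compose -f docker-compose.yml config

    # bring it up outside portainer to see if the error is portainer-specific
    docker-compose -f docker-compose.yml up -d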

    Hi,
    I'd love to get paperless working in OMV 4 (https://hub.docker.com/r/thepaperlessproject/paperless) but the docs for setup use docker compose, so I'm at a bit of a loss as to what I need to put into omv's docker setup to get it to run. I found a howto for unraid (https://forums.unraid.net/topi…perless-dockerhub-unraid/) but it's a bit complicated, involving running two separate dockers. I'd really appreciate some pointers on how to adapt the howto to OMV. If technodadlife happened to be willing to do a video howto, that would be most epic of him.
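
    For what it's worth, here is my rough guess at a docker run translation of the two services (webserver and consumer), using the same host paths as in the compose file above. I haven't tested this and I'm assuming the published thepaperlessproject/paperless image accepts the same commands as the compose example, so take it with a grain of salt:

    # webserver: serves the UI on host port 8001
    docker run -d --name paperless-web --restart always \
      -p 8001:8000 \
      -v /srv/dev-disk-by-label-bigraid/dockerconf/paperless/data:/usr/src/paperless/data \
      -v /srv/dev-disk-by-label-bigraid/dockerconf/paperless/media:/usr/src/paperless/media \
      -v /srv/dev-disk-by-label-bigraid/paperless:/consume \
      -e PAPERLESS_OCR_LANGUAGES=eng \
      thepaperlessproject/paperless gunicorn -b 0.0.0.0:8000

    # consumer: watches /consume and imports whatever lands there
    docker run -d --name paperless-consumer --restart always \
      -v /srv/dev-disk-by-label-bigraid/dockerconf/paperless/data:/usr/src/paperless/data \
      -v /srv/dev-disk-by-label-bigraid/dockerconf/paperless/media:/usr/src/paperless/media \
      -v /srv/dev-disk-by-label-bigraid/paperless:/consume \
      thepaperlessproject/paperless document_consumer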

    This isn't as big a disaster as I thought now that I've used it for a bit, since all the stuff on my bigdisk is not affected and all the dockers/plugins/cifs use the bigdisk directly, so I'm going to give up on fixing this and just keep lurking until a surefire fix is in. I tried the stuff in post 10 and I don't think it works for me, because I'm not using zfs, so waiting for zfs doesn't help (I could be wrong).

    It's 3 12TB drives made into a bigdisk with the unionfs plugin. It previously worked. The individual disks and the bigdisk all show up in /srv/ and are readable there. I ran omv-mkconf hdparm as part of running everything, but the disks are mounted; it's just these add-on /sharedfolders directories that are not mounting. (I looked in fstab: the stuff that works in /srv/ is in there, the sharedfolder stuff is not, so I'm assuming they are generated outside of the normal linux way with some omv magic?)
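
    My guess (unverified) is that OMV 4 creates the /sharedfolders bind mounts as generated systemd mount units rather than fstab entries, which would explain why they don't show up in fstab. If that's right, this is where I'd look first:

    # list any generated sharedfolder mount units and whether they're active
    ls /etc/systemd/system/sharedfolders-*.mount
    systemctl list-units --all 'sharedfolders-*'

    # compare against what's actually mounted
    mount | grep sharedfolders

    # check one unit's state after a reboot ("documents" is a made-up share name)
    systemctl status sharedfolders-documents.mount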

    Hi,


    So I tried to set up a bonded ethernet connection, which ended up being a big disaster requiring command-line intervention. I had a "configuration not committed, revert or commit" issue, so I went through and ran omv-mkconf on everything to try to get rid of it (this was probably dumb). Anyway, after getting that sorted, now every time I reboot the system I lose all my sharedfolder mounts. They instantly all recreate if I add or remove a mount. Does anyone have any suggestions on how to make this permanent again? I took a peek in the config.xml file without changing anything, and as near as I can tell everything is in there, so it's a bit perplexing that this isn't persistent when it was before this bonded ethernet adventure/disaster.


    Thanks in advance,


    Chris