Posts by Keif888

    I have added a pull request (https://github.com/OpenMediaVa…mediavault-scripts/pull/5) with my attempt to add the capability mentioned above.

    I have tested it on my instance of OpenMediaVault, which was a learning experience in itself.


    For others wanting to add new variables to the JSON config files: you have to run monit restart omv-engined before attempting to use the UI with the new fields. Otherwise you get something like this:

    And that error is thrown even though validating the OMV database with omv-showkey logretentiontype worked.
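    The sequence that worked for me was roughly the following; logretentiontype is the new field from my pull request, so substitute your own key name:

```shell
# Restart the OMV backend engine so it picks up the new data model fields
monit restart omv-engined

# Then confirm the new key is visible in the OMV config database
omv-showkey logretentiontype
```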


    Other things to take into account:

    After updating the yaml that generates the front end, you have to run the command(s) as documented here in the OpenMediaVault documentation.
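    From memory the relevant commands are along these lines; treat this as a sketch and check the OpenMediaVault documentation for the authoritative list:

```shell
# Rebuild the workbench UI from the yaml definitions
omv-mkworkbench all

# Restart the engine so the new RPCs and data models are loaded
monit restart omv-engined
```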

    Hi,

    The plugin makes it easy to manage my scripts :)


    Is it possible to make the /etc/logrotate.d/omv-scripts-exec-wrapper-logs settings editable from the scripts settings tab?


    I have made manual changes to the settings file, but I don't know if the plugin will undo my manual changes.


    The current configuration keeps the log files for 90 months (monthly rotation with rotate 90), which seems excessive. That assumes that I haven't misunderstood the logrotate man page.


    Being able to change between monthly, weekly and daily, and then set the number of rotations will allow the end user to choose how long the log files are kept.

    My personal settings would be as follows:

    • omv-scripts-exec-tracker.log
      • I have changed monthly to daily
        • which probably doesn't need to be end user controlled
    • omv-scripts-exec-tracker/*.log
      • monthly to weekly
      • rotate 90 to rotate 4
      • notifempty to ifempty
        • which should be end user controllable

    I went with ifempty because most executions of my scripts do not return any results, and I want the files to be removed after the rotation period.
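    For illustration, the settings I'm after would look roughly like this in /etc/logrotate.d/omv-scripts-exec-wrapper-logs (the /var/log paths and the extra options are assumptions on my part; only the daily/weekly/rotate/ifempty values are my actual preferences):

```
# tracker log: rotate daily (path assumed)
/var/log/omv-scripts-exec-tracker.log {
    daily
    missingok
    compress
}

# per-script logs: weekly, keep 4 rotations, rotate even when empty
/var/log/omv-scripts-exec-tracker/*.log {
    weekly
    rotate 4
    ifempty
    missingok
    compress
}
```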

    No, my settings in the Kubernetes settings tab are:

    • Enabled - Yes
    • Datastore - SQLite
    • Snapshots - a folder that is only used for snapshots
    • Certificate - Yes, generated with openssl
    • Load Balancer
      • http - 8080
      • https - 8443
    • Dashboard
      • port - 4443


    I have set up a script that runs on reboot (all within OMV using the Scripts plugin) as follows, as the kubernetes-dashboard-kong deployment never works after a reboot. But if I scale it down and back up, it then works. I have no idea what the underlying cause is, though.


    Bash
    #!/bin/bash
    # Give k3s ~10 minutes after boot to bring everything up
    sleep 600
    # If k3s is still restarting the failed kong proxy container, bounce the deployment.
    # The trailing grep makes the if-test depend on an actual match being printed.
    if journalctl --since "65 seconds ago" SYSLOG_IDENTIFIER=k3s -g "restarting failed container=proxy pod=kubernetes-dashboard-kong" | grep k3s; then
        kubectl scale -n kubernetes-dashboard deployment kubernetes-dashboard-kong --replicas=0
        sleep 15
        kubectl scale -n kubernetes-dashboard deployment kubernetes-dashboard-kong --replicas=1
    fi

    I ended up updating my kubernetes-dashboard-kong deployment and turning the kong admin off as shown below, as changing the port failed on the next reboot.
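    The deployment edit itself didn't survive the copy here, but it amounts to setting the KONG_ADMIN_LISTEN environment variable to off for the kong container. A sketch of doing that with kubectl (the container name "proxy" and the namespace are assumptions based on the default kubernetes-dashboard deployment; verify with kubectl describe deployment first):

```shell
# Turn the Kong admin listener off entirely so it stops binding a conflicting port
kubectl set env -n kubernetes-dashboard deployment/kubernetes-dashboard-kong \
    -c proxy KONG_ADMIN_LISTEN=off
```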

    Hi,

    I have had a problem with the kubernetes-dashboard-kong pod going into a CrashLoopBackOff status with the log output as follows:



    This appears to be because the port used for KONG_ADMIN_LISTEN in the default deployment is 8444, which is already in use, as per this issue: https://github.com/kubernetes/dashboard/issues/8765


    I manually worked around this by updating my deployment to use port 8445 and using kubectl apply to push the update through. I couldn't work out how to run the helm command mentioned in the issue above.
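    Roughly, the manual workaround looked like the following; the dump-edit-apply loop is a sketch, and the blanket sed replacement assumes 8444 only appears in the admin listen address, so check the yaml before applying:

```shell
# Dump the current deployment, change the admin port from 8444 to 8445,
# then push the change back with kubectl apply
kubectl get deployment -n kubernetes-dashboard kubernetes-dashboard-kong -o yaml > kong.yaml
sed -i 's/8444/8445/g' kong.yaml
kubectl apply -f kong.yaml
```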


    Is it possible to update the kubernetes plugin to apply the fix noted in the issue above?

    Or update the deployment to change the port away from 8444?


    Thanks.

    I didn't like my workaround above, so I learnt some more about kubernetes.


    I have created a pull request to fix the underlying issue, reported in this thread, of cert-manager regenerating the certificate.

    fix: k8s certificates assigned through settings are overwritten by cert-manager by keif888 · Pull Request #1973 · openmediavault/openmediavault


    This fix works because, when a certificate is configured in OMV for k8s, OMV uses a different secret and assigns traefik to use that secret. This removes the issue of the default-tls-cert being regenerated, because OMV isn't using the default-tls-cert at all.

    If the certificate is removed from the k8s settings, then it reverts to using the default-tls-cert.
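    For context, the mechanism here is traefik's default TLS store. Pointing it at a dedicated secret looks roughly like this (the secret name below is a made-up example; I haven't checked which name the PR actually uses, and older traefik releases use the traefik.containo.us/v1alpha1 API group instead):

```yaml
apiVersion: traefik.io/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    # hypothetical secret name for illustration
    secretName: omv-k8s-tls
```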

    I've had no luck getting an openssl-generated cert from within OMV into k8s using the suggestions above.

    Every time it attempts to apply the changes to default-tls-cert, cert-manager stomps back over the top of the applied default-tls-cert with a newly generated cert. I have also tried the k3s instructions (Using Custom CA Certificates) to add my own CA certs, but that wasn't working either; it still generated untrusted certs. (I'm a kubernetes newbie.) My CA was there, and I could see it in the Config Maps, but it wasn't used.


    My workaround is to change the scale of the cert-manager to 0, and then reapply the certificate in OMV.

    Connecting to the OMV host, I ran the following (although I was root at the time, so the sudo was redundant):

    Code
    sudo kubectl scale -n cert-manager deployment cert-manager --replicas=0


    Solution

    1. reduce the cert-manager replicas to 0

    2. In the k8s plugin settings page, change the certificate option to your desired certificate

    3. Apply (either via the UI or omv-salt deploy run --append-dirty)


    I do not know what impact "disabling" the cert-manager pod will have over time.