Posts by mirada

Entering the password seems to work: if I enter the wrong one, the prompt keeps coming back, and it only disappears when I type print/print. But then this error message appears (black window filling the page, white letters in the upper left corner):

    Internal Server Error

    I've changed things quite often. And yes, I've read the references to 'Docker in OMV 7' and '(Docker) Compose Plugin For OMV7' on wiki.omv-extras.org (and some other sites in the WWW).

    Result of "check" :

    As recommended, I also tried loading the Portainer image.

    The Portainer page opens, but an error appears in the upper right corner (related?) :


Maybe it's the same underlying problem ... ?


    ======================

    Correction / Edit :

I changed the port mapping from 9000:9000 to 9443:9443. Now I can access the Portainer site via https and create an account.


    ======================

Correction / Edit (2nd) :

Now I changed the port mapping back from 9443:9443 to 9000:9000, and I can access the Portainer site without https.

It's easily reachable via my.omv.ip:9000, and I can log in too.
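This behaviour matches how Portainer CE exposes its UI: plain HTTP on port 9000 and HTTPS (with a self-signed certificate) on 9443. A minimal sketch of the relevant compose section - the service name and volume are assumptions, not the actual file used here:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "9000:9000"   # HTTP UI
      - "9443:9443"   # HTTPS UI (self-signed certificate by default)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    restart: unless-stopped

volumes:
  portainer_data:
```

With both mappings present, either URL works; mapping only one of them explains why the other scheme appeared broken.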


    strange & curious ...


    but cups still doesn't work :(

    I now want to install a print server on my OMV7 and have decided on Docker/Compose. Under OMV5, I had CUPS installed natively, which worked well. I've tried various Docker images, but none of them work for CUPS... Most of them require a username/password, but no combination I know of has been accepted.

Then I found the image I'm currently using:

    (From here)


The page 192.xxx.x.xx:631/ opens (OpenPrinting CUPS 2.4.11), but after clicking on 'Administration' ('Verwaltung') and entering the username/password (print/print), only 'Internal Server Error' appears in the window.
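The compose file itself was posted as a screenshot, so for comparison here is a rough sketch of how such a CUPS container is commonly defined. The image name, host path, and port are assumptions, not the actual file:

```yaml
services:
  cups:
    image: olbat/cupsd        # one of several community CUPS images (assumption)
    ports:
      - "631:631"             # CUPS web interface / IPP
    volumes:
      - /srv/appdata/cups:/etc/cups   # hypothetical host path for persistent config
    restart: unless-stopped
```

Whether the print/print login is accepted depends on the admin user baked into the chosen image, and an 'Internal Server Error' right after authentication may also point at file permissions on the mounted config directory rather than at the credentials.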


Example: clicking on 'Classes' ('Klassen') :

    Classes

    Classes Error

    Unable to get class list

    Quote
    Bad file descriptor


    Maybe I have an error in the file or user permissions (appuser) or an incorrect entry in the shared folders...


    Need Help ...


Why would you give it to me if not to run it?

I thought you needed the output of the first command before giving your "go".


    There never was a button to migrate data from portainer.

    Is this post perhaps outdated?

In the video I saw the button "[Create from example]".


    Just search the forum

    I am already doing this diligently and want to learn :)


    and last but not least the result of 'fix6to7upgrade' :

    ... and all the Plugins in the GUI now have 7.xxx !!


    Thx a lot , Soma !

    Code
    root@omv:~# dpkg-l | grep openm
    -bash: dpkg-l: Kommando nicht gefunden.
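The error comes from a missing space: 'dpkg-l' is parsed as a single command name, which does not exist, hence "Kommando nicht gefunden" (command not found). The intended call is 'dpkg -l':

```shell
# 'dpkg-l' (without the space) is looked up as one command name and fails:
dpkg-l 2>/dev/null || echo "dpkg-l: command not found"
# With the space, dpkg runs in list mode and grep filters the output:
#   dpkg -l | grep openm
```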

Helpful? :



    , run:

    I haven't seen this command yet...
    Can I run it?

    But first: Does the new version also have a button to migrate data from Portainer to Compose like in version 6?

I've now upgraded OMV from 6 to 7.

Since CUPS no longer works, I have to do it with Docker. However, I'm not sure whether omv-extras is up to date. The manuals mention compose 7; here, I'm only shown compose 6. What else do I need to do?

     

I found <THIS> now here in the forum. It was indeed unofficial and somewhat questionable. Apparently, the provider discontinued the app or switched from the Google Play Store to APKPure.

    As already mentioned in the post, you don't have full access to the data, only to the GUI. But you're right about security. I'll delete what's left of it.

    It's a shame that something like this isn't officially offered. It would be nice to be able to quickly configure or check something on the go using your smartphone...

I'm having the same problem with "Panic or segfault in Samba" emails, but on two similar systems running OMV 6 :


    One mini-PC has been running smoothly (24/7) since around 2023, and I upgraded the other (desktop PC) from 5 to 6 around the end of last year. Both send this warning via email every time a file is copied in File Explorer - regardless of whether it's Linux or Windows. If 100 files are copied, 100 emails arrive! This problem started around January - I don't know exactly...

    I've been waiting for an update to fix this - but nothing...

    There are further hints here in the forum, but no solution.

After a few attempts I got it working, and once again I'll answer my own question:


I attached the new, larger 3 TB disk to another PC (Linux Mint) and checked it with GParted.

The volume label had taken on the UUID. I renamed it back to maxtor (as it was before).

After connecting it to OMV, the new hard drive was recognized exactly like the old one.

Why that should matter (really, it shouldn't...) I unfortunately can't say. In any case, it works now!

Hello,


    I upgraded the data hard drive on my mini PC. From USB2 to USB3 (different connection on the PC) and from 500 GB to 3 TB.

    I cloned the hard drive 1:1 with dd and expanded it to the free space with GParted. The name is still the same as before (sdb, path, UUID & PARTUUID):
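The rough shape of that clone-and-grow workflow, as a sketch with placeholder device names (double-check with lsblk before running anything; the clone overwrites the target disk):

```shell
SRC=/dev/sdX   # old 500 GB disk (placeholder)
DST=/dev/sdY   # new 3 TB disk (placeholder; everything on it is overwritten)
# Bit-for-bit clone:
#   dd if="$SRC" of="$DST" bs=4M conv=fsync status=progress
# Then grow the partition and the ext4 filesystem into the new space,
# e.g. with GParted, or non-interactively:
#   growpart "$DST" 1 && resize2fs "${DST}1"
echo "clone $SRC -> $DST, then grow partition 1"
```

Because dd copies everything verbatim, the filesystem UUID, PARTUUID and label all stay identical to the old disk, which is exactly what the output above shows.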



    However, the new hard drive is not mounted.

In the GUI under "Storage - Disks" the hard drive is shown as "/dev/sdb". The rest of the entry is of course different from the old hard drive's.

Can I get it mounted again without much effort, or do I have to reference/mount the data storage again?
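OMV mounts data filesystems by UUID (the /srv/dev-disk-by-uuid-... paths), and a 1:1 dd clone keeps the UUID, so the existing fstab entry should still match the new disk. A quick comparison, as a sketch to run as root:

```shell
# Show the filesystem UUID of the cloned partition:
#   blkid -s UUID -o value /dev/sdb1
# Show what OMV's fstab expects:
#   grep 'dev-disk-by-uuid' /etc/fstab
# If the UUIDs agree, 'mount -a' should bring the share back; if not,
# the filesystem has to be referenced again under Storage > File Systems.
echo "compare blkid UUID of /dev/sdb1 with the UUID in /etc/fstab"
```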


Regards,

    mirada

    I think it looks good ...

The message at the top of the "Storage | Software RAID | Details" panel (mdadm: Unknown keyword INACTIVE-ARRAY) is now gone!

    I'll keep an eye on the whole thing and give you feedback.

    Output :


    root@omv:~# omv-salt deploy run mdadm initramfs

    debian:

    ----------

    ID: remove_cron_daily_mdadm

    Function: file.absent

    Name: /etc/cron.daily/mdadm

    Result: True

    Comment: File /etc/cron.daily/mdadm is not present

    Started: 01:07:20.063148

    Duration: 0.864 ms

    Changes:

    ----------

    ID: divert_cron_daily_mdadm

    Function: omv_dpkg.divert_add

    Name: /etc/cron.daily/mdadm

    Result: True

Comment: Leaving 'local diversion of /etc/cron.daily/mdadm to /etc/cron.daily/mdadm.distrib'

    Started: 01:07:20.064662

    Duration: 13.07 ms

    Changes:

    ----------

    ID: configure_default_mdadm

    Function: file.managed

    Name: /etc/default/mdadm

    Result: True

    Comment: File /etc/default/mdadm updated

    Started: 01:07:20.078009

    Duration: 117.321 ms

    Changes:

    ----------

    diff:

    ---

    +++

    @@ -1,20 +1,21 @@

    -# mdadm Debian configuration

    -#

-# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if

-# you want. You can also change the values here and changes will be preserved.

    -# Do note that only the values are preserved; the rest of the file is

    -# rewritten.

    -#

+# This file is auto-generated by openmediavault (https://www.openmediavault.org)

    +# WARNING: Do not edit this file, your changes will get lost.

    +

    +# INITRDSTART:

+# list of arrays (or 'all') to start automatically when the initial ramdisk

+# loads. This list *must* include the array holding your root filesystem. Use

+# 'none' to prevent any array from being started from the initial ramdisk.

    +#INITRDSTART='none'

    +

    +# AUTOSTART:

    +# should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically

    +# during boot?

    +AUTOSTART=true


    # AUTOCHECK:

# should mdadm run periodic redundancy checks over your arrays? See

    # /etc/cron.d/mdadm.

    AUTOCHECK=true

    -

    -# AUTOSCAN:

    -# should mdadm check once a day for degraded arrays? See

    -# /etc/cron.daily/mdadm.

    -AUTOSCAN=true


    # START_DAEMON:

    # should mdadm start the MD monitoring daemon during boot?

    ----------

    ID: configure_mdadm_conf

    Function: file.managed

    Name: /etc/mdadm/mdadm.conf

    Result: True

    Comment: File /etc/mdadm/mdadm.conf updated

    Started: 01:07:20.195451

    Duration: 118.061 ms

    Changes:

    ----------

    diff:

    ---

    +++

    @@ -23,6 +23,3 @@

    MAILFROM root


    # definitions of existing MD arrays

-ARRAY /dev/md/OMV:0 metadata=1.2 name=OMV:0 UUID=71753bdb:548ea668:3af683b6:31435b44

-INACTIVE-ARRAY /dev/md127 metadata=1.1 name=NAS-SYNC-1:1 UUID=b4c741a8:a463898b:cf1c72ea:f540dd7f

-INACTIVE-ARRAY /dev/md126 metadata=1.1 name=ix4-300d:0 UUID=5f2c2150:a36b9e26:f980de9f:84d7fff4

    ----------

    ID: mdadm_save_config

    Function: cmd.run

    Name: mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    Result: True

    Comment: Command "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" run

    Started: 01:07:20.314772

    Duration: 8.977 ms

    Changes:

    ----------

    pid:

    8139

    retcode:

    0

    stderr:

    stdout:

    ----------

    ID: update_initramfs_nop

    Function: test.nop

    Result: True

    Comment: Success!

    Started: 01:07:20.325250

    Duration: 0.605 ms

    Changes:

    ----------

    ID: update_initramfs

    Function: cmd.run

    Name: update-initramfs -u

    Result: True

    Comment: Command "update-initramfs -u" run

    Started: 01:07:20.325957

    Duration: 20986.984 ms

    Changes:

    ----------

    pid:

    8141

    retcode:

    0

    stderr:

    stdout:

update-initramfs: Generating /boot/initrd.img-6.1.0-0.deb11.21-amd64


    Summary for debian

    ------------

    Succeeded: 7 (changed=4)

    Failed: 0

    ------------

    Total states run: 7

    Total run time: 21.246 s

    I have finally updated my omv5 to omv6. Everything seems to be working normally. However, I get the following message in my email:


    /etc/cron.daily/openmediavault-mdadm:

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: Unknown keyword INACTIVE-ARRAY



Some info:


    --------------

    root@omv:~# cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

    md0 : active raid1 sdc[1] sdd[0]

    2930135360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>

    --------------

    root@omv:~# blkid

    /dev/sdb1: UUID="ce2da1ef-7948-445e-98fc-fd6c73baff70" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0de79efb-2be5-4b38-9d11-2a795c81f365"

    /dev/sda1: LABEL="Tosh1" UUID="fb1e45f4-ea1c-4de9-b916-189ef60fe594" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="629f2b40-044e-4a27-aef4-ba1a56ac30ee"

    /dev/sde1: UUID="afcb2e95-3261-4ee7-aca5-2a0b59efecbb" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ee989c28-01"

    /dev/sde5: UUID="a0b829ac-79ae-4476-bda4-b94f0cf6ab14" TYPE="swap" PARTUUID="ee989c28-05"

    /dev/sdd: UUID="71753bdb-548e-a668-3af6-83b631435b44" UUID_SUB="f2fde8d8-06e3-ff69-742e-15d805966938" LABEL="OMV:0" TYPE="linux_raid_member"

    /dev/sdc: UUID="71753bdb-548e-a668-3af6-83b631435b44" UUID_SUB="d3568d19-04d9-bab2-ccc0-811a8c2c27da" LABEL="OMV:0" TYPE="linux_raid_member"

    /dev/md0: LABEL="Server" UUID="ea51eaa0-83a4-4fd9-8ff9-63fd17095186" BLOCK_SIZE="4096" TYPE="ext4"

    --------------

    root@omv:~# fdisk -l | grep "Disk "

    Disk /dev/sdb: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors

    Disk model: WDC WD30EFRX-68E

    Disk identifier: 22AE76A3-E787-4F5F-9503-3946B592E5EC

    Disk /dev/sda: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors

    Disk model: WDC WD30EFRX-68E

    Disk identifier: ED39D5EE-12DD-4D03-BBF0-E95D91579DC8

    Disk /dev/sde: 111,79 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: KINGSTON SA400S3

    Disk identifier: 0xee989c28

    Disk /dev/sdd: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors

    Disk model: WDC WD30EFRX-68E

    Disk /dev/sdc: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors

    Disk model: WDC WD30EFRX-68E

    Disk /dev/md0: 2,73 TiB, 3000458608640 bytes, 5860270720 sectors

    --------------

    root@omv:~# cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts

    MAILADDR xxxxxxxx@xxx.de

    MAILFROM root


    # definitions of existing MD arrays

    ARRAY /dev/md/OMV:0 metadata=1.2 name=OMV:0 UUID=71753bdb:548ea668:3af683b6:31435b44

    INACTIVE-ARRAY /dev/md127 metadata=1.1 name=NAS-SYNC-1:1 UUID=b4c741a8:a463898b:cf1c72ea:f540dd7f

    INACTIVE-ARRAY /dev/md126 metadata=1.1 name=ix4-300d:0 UUID=5f2c2150:a36b9e26:f980de9f:84d7fff4

    --------------

    root@omv:~# mdadm --detail --scan --verbose

    mdadm: Unknown keyword INACTIVE-ARRAY

    mdadm: Unknown keyword INACTIVE-ARRAY

    ARRAY /dev/md/OMV:0 level=raid1 num-devices=2 metadata=1.2 name=OMV:0 UUID=71753bdb:548ea668:3af683b6:31435b44

    devices=/dev/sdc,/dev/sdd


What do I have to do?
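The warning is consistent with the config shown above: mdadm.conf(5) defines no "INACTIVE-ARRAY" keyword (mdadm --detail --scan only ever emits "ARRAY" lines), so every parse of the file complains about those two leftover entries. One way out, as a sketch (back up the file first):

```shell
# Drop the stale entries, keeping a backup:
#   sed -i.bak '/^INACTIVE-ARRAY/d' /etc/mdadm/mdadm.conf
# Let openmediavault regenerate the config and the initramfs:
#   omv-salt deploy run mdadm initramfs
# The filter itself, demonstrated on a two-line sample:
printf 'ARRAY /dev/md/OMV:0 metadata=1.2\nINACTIVE-ARRAY /dev/md127 metadata=1.1\n' \
  | sed '/^INACTIVE-ARRAY/d'
```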


    greetings

    mirada


    Hi,

    what is the current situation with the installation of TVHeadend on OMV6?

Native? (Does it work like this?: Link - Debian) - only via a repository, or simply via apt-get install tvheadend ?

    Docker? - I have no experience with Docker yet...

    The page doozer.io seems to be missing again...

    Connection via SAT>IP - i.e. signal via the network.

    Recording on a separate USB hard drive.

    greeting

    grep Model /proc/cpuinfo

    Model : Raspberry Pi Model B Plus Rev 1.2

    What is the error?

    m@raspberry:~ $ wget -O - https://github.com/OpenMediaVa…Script/raw/master/install | sudo bash

    --2023-04-13 00:00:25-- https://github.com/OpenMediaVa…Script/raw/master/install

    Auflösen des Hostnamens github.com (github.com)… 140.82.121.3

    Verbindungsaufbau zu github.com (github.com)|140.82.121.3|:443 … verbunden.

    HTTP-Anforderung gesendet, auf Antwort wird gewartet … 302 Found

    Platz: https://raw.githubusercontent.…nstlScript/master/install [folgend]

    --2023-04-13 00:00:25-- https://raw.githubusercontent.…tallScript/master/install

Auflösen des Hostnamens raw.githubusercontent.com (raw.githubusercontent.com)… 2606:50c0:8002::154, 2606:50c0:8003::154, 2606:50c0:8000::154, ...

Verbindungsaufbau zu raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8002::154|:443 … verbunden.

    HTTP-Anforderung gesendet, auf Antwort wird gewartet … 200 OK

    Länge: 20388 (20K) [text/plain]

    Wird in »STDOUT« gespeichert.


    - 100%[===================>] 19,91K --.-KB/s in 0,02s


2023-04-13 00:00:26 (878 KB/s) - auf die Standardausgabe geschrieben [20388/20388]


    Current / permissions = 755

    New / permissions = 755

    Forcing IPv4 only for apt...

    Updating repos before installing...

    Hit:1 http://raspbian.raspberrypi.org/raspbian buster InRelease

    Hit:2 http://archive.raspberrypi.org/debian buster InRelease

    Reading package lists... Done

    Installing lsb_release...

    Reading package lists... Done

    Building dependency tree

    Reading state information... Done

    0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.

    Need to get 0 B/27.8 kB of archives.

    After this operation, 0 B of additional disk space will be used.

    (Reading database ... 61288 files and directories currently installed.)

    Preparing to unpack .../lsb-release_10.2019051400+rpi1_all.deb ...

    Unpacking lsb-release (10.2019051400+rpi1) over (10.2019051400+rpi1) ...

    Setting up lsb-release (10.2019051400+rpi1) ...

    Processing triggers for man-db (2.8.5-2) ...

    Supported architecture

    This version of OMV is End of Life. Please consider using OMV 6.x.

    Debian :: buster

    usul :: 5

    RPi revision code :: 0010

    This RPi1 is not supported (not true armhf). Exiting...
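The script's exit is consistent with the cpuinfo line above: the old-style revision code 0010 denotes a Raspberry Pi 1 Model B+ (BCM2835, ARMv6), and the installer only supports true armhf (ARMv7 and later) boards. A sketch of that check, with the revision hard-coded from the output above:

```shell
rev=0010   # value printed by the script above
case "$rev" in
  0010) echo "Pi 1 Model B+ (ARMv6) - not supported by the script" ;;
  *)    echo "possibly supported - see the script source" ;;
esac
```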

I still have an old Raspberry Pi 2 Model B and would like to use it for internal purposes (not reachable from the Internet) as central storage for receiver recordings and for TVHeadend. The speed is perfectly sufficient. Access via Samba and DLNA works as well - but I miss various functions that would be easier to realize with OMV (I already have OMV 5 running on a server, which is only switched on when needed).

Currently, the only download for the Raspberry Pi is via the 'Raspberry Pi Imager'. I installed the 32-bit Legacy Lite image. However, OMV cannot be installed on it with

    Code
    wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash

The Raspi is probably too old.

Is there still a way to get an OMV image for the old box somewhere?


Best regards

    mirada