Posts by gtj

    I also get a similar error.


    My apt sources list is:


    Code
    deb http://cdn.debian.net/debian/ stretch main contrib non-free
    deb-src http://cdn.debian.net/debian/ stretch main contrib non-free
    deb http://security.debian.org/ stretch/updates main contrib non-free
    deb-src http://security.debian.org/ stretch/updates main contrib non-free
    deb http://cdn.debian.net/debian/ stretch-updates main contrib non-free
    deb-src http://cdn.debian.net/debian/ stretch-updates main contrib non-free

    Can someone please help me to amend this?

    The drive is indicating bad sectors, but it could last for quite some time. You could leave it in place and keep an eye on 197 and 198; if those begin to increase, or you see errors in 5, then replace the drive. Worst case scenario the drive fails and the RAID will display as degraded, but it stays usable until the drive is replaced.
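
    (For reference, 5, 197 and 198 are SMART attributes; assuming smartmontools is installed and the disk in question is /dev/sda, they can be checked with:)


    Code
    # show only the three attributes worth watching: 5, 197 and 198
    smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'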

    Great info. Thanks a bunch!

    Change the drive

    Thanks for the suggestion. I'm quite embarrassed, as the drives are not that old and I was hoping they would last me several years, but I will make sure I schedule this and eventually replace them.


    Out of curiosity, which of the values above actually indicate a hardware failure?

    Then you'll have to try it from the CLI; assuming it's /dev/sda, try mdadm --add /dev/md0 /dev/sda
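
    (A minimal sketch of that, assuming the degraded array really is /dev/md0 and the dropped disk is /dev/sda:)


    Code
    mdadm --add /dev/md0 /dev/sda   # re-add the disk to the degraded mirror
    cat /proc/mdstat                # watch the rebuild progress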

    It started recovering! Thank you so much!

    How am I going to prevent this from happening again in the future?

    Shut the server down safely, and probably attach it to a UPS?


    So under RAID management, when you select the RAID that's degraded, the Recover option is greyed out? Are these USB drives?

    They are not USB attached. They are SATA.

    I'm sorry. My bad. It's not greyed out, but if you click the "Recover" option, there's no drive or array to choose. Please refer to the attached screenshots.

    Hello,


    I have the exact same problem.


    I run two arrays on my OMV server. One is a RAID0, which works fine and shows "clean"; the second is a mirror (RAID1), which shows "clean, degraded".

    The RAID1 works OK too, but my most important files are in that array, so I'm a bit nervous about its degraded state...

    The drives in the RAID1 array are 2 x WD Red 4 TB.

    I'm under the impression that the degraded state came up after a sudden power loss.


    cat /proc/mdstat


    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid0 sdd[1] sdc[0]
    976420864 blocks super 1.2 512k chunks

    md0 : active raid1 sdb[1]
    3906887488 blocks super 1.2 [2/1] [_U]
    bitmap: 20/30 pages [80KB], 65536KB chunk

    unused devices: <none>


    blkid


    /dev/mmcblk1p6: SEC_TYPE="msdos" LABEL="boot" UUID="C0F8-560C" TYPE="vfat" PARTLABEL="boot" PARTUUID="32eef3ba-12f6-4212-84e2-5b0d76f4a993"
    /dev/mmcblk1p7: LABEL="linux-root" UUID="2554df01-b8d0-41c1-bdf7-b7d8cddce3b0" TYPE="ext4" PARTLABEL="root" PARTUUID="529dbef1-9df3-4bdd-ac34-01de150ad7d8"
    /dev/sda: UUID="7e4bee7e-c759-d5e9-3a7c-3b4f29675188" UUID_SUB="7b6b7f61-8872-f687-ffbf-3b896263b623" LABEL="rockpro64:0" TYPE="linux_raid_member"
    /dev/sdb: UUID="7e4bee7e-c759-d5e9-3a7c-3b4f29675188" UUID_SUB="4c994784-29f0-2972-bd52-9900cc19108d" LABEL="rockpro64:0" TYPE="linux_raid_member"
    /dev/md0: LABEL="RAID1" UUID="fa71c879-fd37-4fe1-8936-e5d730e3ac50" TYPE="ext4"
    /dev/sdc: UUID="b1196f1a-4e11-83fc-59f0-41f2cc3e4aec" UUID_SUB="0249ea81-7454-7012-9eed-8ad7df2e9549" LABEL="rockpro64:0" TYPE="linux_raid_member"
    /dev/md127: LABEL="RAID0" UUID="b14b3921-f446-4ca6-8d2d-360618d8f1f8" TYPE="ext4"
    /dev/sdd: UUID="b1196f1a-4e11-83fc-59f0-41f2cc3e4aec" UUID_SUB="de2dfc68-3105-b22c-6bb9-d5780b1e64d1" LABEL="rockpro64:0" TYPE="linux_raid_member"
    /dev/mmcblk1: PTUUID="86a3793b-4859-49c5-a070-0dd6149749ab" PTTYPE="gpt"
    /dev/mmcblk1p1: PARTLABEL="loader1" PARTUUID="762eed66-a529-4c13-904d-33b8ff2d163e"
    /dev/mmcblk1p2: PARTLABEL="reserved1" PARTUUID="c72e8fa0-8797-4c20-9ebd-3740dcbd03d4"
    /dev/mmcblk1p3: PARTLABEL="reserved2" PARTUUID="afc8a236-0220-4f9a-a5ab-016019452401"
    /dev/mmcblk1p4: PARTLABEL="loader2" PARTUUID="02cec096-2af6-46de-8007-2637eb8edc15"
    /dev/mmcblk1p5: PARTLABEL="atf" PARTUUID="effa07ab-4894-4803-8d36-3a52d1da9f44"

    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # definitions of existing MD arrays
    ARRAY /dev/md/rockpro64:0 metadata=1.2 name=rockpro64:0 UUID=7e4bee7e:c759d5e9:3a7c3b4f:29675188
    ARRAY /dev/md/rockpro64:0_0 metadata=1.2 name=rockpro64:0 UUID=b1196f1a:4e1183fc:59f041f2:cc3e4aec

    mdadm --detail --scan --verbose
    ARRAY /dev/md/rockpro64:0 level=raid1 num-devices=2 metadata=1.2 name=rockpro64:0 UUID=7e4bee7e:c759d5e9:3a7c3b4f:29675188
    devices=/dev/sdb
    ARRAY /dev/md/rockpro64:0_0 level=raid0 num-devices=2 metadata=1.2 name=rockpro64:0 UUID=b1196f1a:4e1183fc:59f041f2:cc3e4aec
    devices=/dev/sdc,/dev/sdd
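
    If it helps, I understand the usual next step would be something like the commands below, to check whether /dev/sda still carries its RAID superblock and how its event count compares to /dev/sdb - but I'd rather have confirmation before touching anything:


    Code
    mdadm --examine /dev/sda   # shows the member superblock, array UUID and event count
    mdadm --examine /dev/sdb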



    Any help will be greatly appreciated!

    Hello everybody.


    I've set up a Domoticz Docker container, but I want to initialize my RFXCOM transceiver alongside my Z-Wave USB controller.


    In the extra-arguments field, how am I supposed to enter both values? I have entered the value for the Z-Wave controller successfully, but how do I also enter the second entry for the RFXCOM?


    I tried:
    --device /dev/ttyACM0:/dev/ttyACM0 && --device /dev/ttyUSB0:/dev/ttyUSB0


    separating those two with "&&", a ":", or even a ",", but nothing seems to work, as the configuration/modification cannot be saved.
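
    For reference, what I'm after is the equivalent of a plain docker run with two separate --device mappings and nothing between them (the image name below is just a placeholder); I simply can't get that accepted in the extra-arguments field:


    Code
    docker run -d \
      --device /dev/ttyACM0:/dev/ttyACM0 \
      --device /dev/ttyUSB0:/dev/ttyUSB0 \
      <domoticz-image>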


    Many thanks

    not sure if the sticky bit is a problem


    try


    chmod 774 -R /sharedfolders/AppData/demoticz


    Which hardware are you using?

    It didn't work.
    I'm running OMV on a RockPro64 with its NAS case.
    A Unify controller in a Docker container is already running on port 8080 with no problems, so I guess Docker should be working fine on my OMV instance?
    Interestingly, from my Windows PC the folder AppData>Domoticz is NOT accessible, while the folder AppData>Unify is.


    I will try the docker-compose option.
    Is that configuration going to create a folder itself or should I create a Domoticz folder manually prior to running the script?


    Thanks for all your help once again!




    UPDATE


    Just tried the docker-compose option.
    I followed the instructions up to docker-compose up -d


    but I'm getting the error:
    ERROR: Named volume "sharedfolder/AppData/domoticz:/config:rw" is used in service "domoticz" but no declaration was found in the volumes section.
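
    In hindsight I suspect compose treats "sharedfolder/AppData/domoticz" as a named volume because the host path isn't absolute; writing the bind mount with the full path, as in the sketch below (path guessed from my setup), should at least avoid that particular error:


    Code
    # excerpt of docker-compose.yml - a host path must start with "/" to be treated as a bind mount
    services:
      domoticz:
        volumes:
          - /sharedfolders/AppData/domoticz:/config:rw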



    ..................................


    I removed the Domoticz folder, recreated it, deleted it again, and let the image create it by itself - pretty much tried all the options.


    I will give the other image you suggested a go as soon as I get home.


    Do I have to run its configuration as a one-line command as you suggested, or should I save a docker-compose.yml like in the previous example?


    Many thanks for all your help. It has proven invaluable.
    At least I know I'm not doing anything wrong and the problem is with the image, which is getting me nowhere!



    Just to update on this: this solution did not work either.
    I'm fed up with all this BS. Thank you very much for your effort and willingness to help though. It is thoroughly appreciated!



    Tried a few other containers with no luck.
    I just posted my issue to the linuxserver.io forum. Let's see if they can pull a rabbit out of the hat...




    I posted my configuration and logs there and am waiting for their assistance.


    In the meantime, I managed to install a working image which was https://hub.docker.com/r/wbvreeuwijk/domoticz


    The problem is that, similarly to the Domoticz plugin, it does not seem to support OpenZWave USB, as my USB stick is not available in the drop-down menu of the hardware settings; therefore, it's pretty much useless for my use case.

    from CLI


    ls -al /sharedfolders/AppData
    ls -al /sharedfolders/AppData/demoticz


    will show you owner and group and the permissions they have

    The output is as follows:


    ls -al /sharedfolders/AppData
    total 16
    drwxrwsrwx 4 root users 4096 Aug 6 01:14 .
    drwxr-xr-x 9 root root 4096 Aug 4 14:47 ..
    drwxrwSr-- 5 gtj users 4096 Aug 7 10:59 Domoticz
    drwxrwsrwx 5 gtj users 4096 Aug 5 02:17 Unify




    ls -al /sharedfolders/AppData/Domoticz
    total 428
    drwxrwSr-- 5 gtj users 4096 Aug 7 10:59 .
    drwxrwsrwx 4 root users 4096 Aug 6 01:14 ..
    -rwxrw-r-- 1 gtj users 136 Aug 7 05:00 domocookie.txt
    -rwxrw-r-- 1 gtj users 413696 Aug 6 02:19 domoticz.db
    drwxrwSr-- 2 gtj users 4096 Aug 6 02:18 keys
    drwxrwSr-- 2 gtj users 4096 Aug 6 02:18 plugins
    drwxrwSr-- 8 gtj users 4096 Aug 6 02:15 scripts
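
    I do notice the capital "S" on the Domoticz folder (setgid set but group execute missing) and that "others" can't enter it at all; if that turns out to be relevant, I suppose something like this would normalise it:


    Code
    # give owner and group full access; X adds execute only to directories (and already-executable files)
    chmod -R ug+rwX,o+rX /sharedfolders/AppData/Domoticz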



    Thank you!