Posts by amikot

    I'm trying to install the Compose plugin; every time I get this error:

    ** CONNECTION LOST **


    After reconnecting to the Workbench, I can see all the Compose plugin settings, but if I try to set the Compose folder (to a prepared shared folder location) I get this error:

    Code
    500 - Internal Server Error
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color compose 2>&1' with exit code '1':
    debian:
    ----------
              ID: docker_install_packages
        Function: pkg.installed
          Result: False
         Comment: Problem encountered installing package(s). Additional info follows:
                  errors:
                      - Running scope as unit: run-reb0218a7fcd8460586f8c8854cf06954.scope
                        E: Package 'docker-ce' has no installation candidate
         Started: 23:29:02.666812
        Duration: 7461.964 ms
         Changes:
    ----------
              ID: docker_compose_install_packages
        Function: pkg.installed
          Result: False
         Comment: Problem encountered installing package(s). Additional info follows:
                  errors:
                      - Running scope as unit: run-r0633ed45cba145e8b597f1645a3ac6eb.scope
                        E: Package 'docker-compose-plugin' has no installation candidate
                        E: Unable to locate package containerd.io
                        E: Couldn't find any package by glob 'containerd.io'
                        E: Couldn't find any package by regex 'containerd.io'
                        E: Package 'docker-ce-cli' has no installation candidate
                        E: Unable to locate package docker-buildx-plugin
         Started: 23:29:10.129572
        Duration: 927.888 ms
         Changes:
    ----------
              ID: docker_purged_package...


    I've tried this on OMV 6.9 and on 6.10 (I just installed the update) - exactly the same result.
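
    In case it helps with diagnosing: from what I've read, "no installation candidate" usually means apt can't see the Docker repository at all. This is how I checked it (the repo file name is my guess - it may be different on your system):

    Code
    # Check whether the Docker repo file exists and what it points at
    cat /etc/apt/sources.list.d/docker.list

    # Ask apt where docker-ce would come from; with this error,
    # the output shows no candidate version at all
    apt-cache policy docker-ce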

    I would like to use Docker, so if someone has any ideas what to do, please share.
    Thanks :)

    When I try to install the ZFS plugin, I get an error.

    Version
    6.8.0-1 (Shaitan)

    Kernel
    Linux 6.1.0-0.deb11.7-amd64

    Code
    500 - Internal Server Error
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; export DEBIAN_FRONTEND=noninteractive; apt-get --yes --allow-downgrades --allow-change-held-packages --fix-missing --allow-unauthenticated --reinstall install openmediavault-zfs 2>&1' with exit code '100':
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Some packages could not be installed. This may mean that you have requested
    an impossible situation or if you are using the unstable distribution that
    some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     linux-headers-amd64 : Depends: linux-headers-6.1.0-0.deb11.9-amd64 (= 6.1.27-1~bpo11+1) but it is not installable
    E: Unable to correct problems, you have held broken packages.
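
    If I read this correctly, the linux-headers-amd64 metapackage wants a headers build that my package index doesn't provide. This is roughly how I checked it (standard apt commands; the version number is just the one from my error above):

    Code
    # Refresh the package lists first - a stale index is a common cause
    apt-get update

    # See which headers version the metapackage wants,
    # and whether that exact package is actually available
    apt-cache policy linux-headers-amd64
    apt-cache policy linux-headers-6.1.0-0.deb11.9-amd64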

    A single backup might not be, but borgbackup makes more than one backup. So, it is very reliable.

    zfs can't protect against corruption any better than snapraid. rsync would give you one backup. Borgbackup could give you many backups using not much more space, because it dedupes and compresses.


    I think you need to research why you think zfs is better and snapraid is only ok. Read the snapraid comparison (https://www.snapraid.it/compare), which compares snapraid to zfs and other RAID options. In particular, look at the "Fix silent errors" section, since you seem to worry most about that.


    My NAS is an old laptop with two SATA ports - I could maybe connect something via USB 2.0, but I don't want to.
    My backup is kept in a drawer on six HDDs, while in the NAS I have two SSDs.

    In fact my NAS has two roles:
    1. It is a home media server (including photos and videos from the past) - this data isn't on any other PC, so I keep it on those six HDDs.
    2. It is a backup and exchange hub for the family laptops/PCs/phones - this data doesn't need extra protection, as copies exist on the laptops.

    That's why I don't care too much about having a backup agent constantly running, but I would like to avoid silent corruption.
    Especially my tens of thousands of family photos from the past 25 years - it would be stupid to lose them. To be honest, I think some of them may be corrupted already.
    It is good to have a backup of everything, but I can't imagine manually checking thousands of images and restoring the broken ones from the backup. That's why I would prefer some automation here.

    Snapraid calculates parity to detect changes. If the modified time of the file hasn't changed but a bit is flipped, then corruption has occurred. But snapraid isn't different from RAID, which means it is for availability and bitrot protection, not a replacement for backup. I recommend a backup method that can create multiple backups and protect against bitrot, like borgbackup.
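
    For reference, a minimal borgbackup routine along those lines might look like this (repository and source paths are placeholders, not a recommendation of any particular layout):

    Code
    # One-time: create a repository with authenticated encryption
    borg init --encryption=repokey /srv/backup/borg-repo

    # Each run adds a new deduplicated, compressed archive
    borg create --compression zstd \
        /srv/backup/borg-repo::photos-{now:%Y-%m-%d} /srv/data/photos

    # Verify repository and archive data - this catches bitrot
    # inside the backup itself
    borg check --verify-data /srv/backup/borg-repo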


    I have a backup of everything, but since the backup has no bitrot protection, it's not reliable either.
    In my situation silent corruption is the biggest danger - that's why I'm considering converting my disks to ZFS in RAID1, or ZFS with an rsync backup on a second drive which would also be ZFS. RAID1 would give me availability; rsync would give slightly better protection.
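
    For context, the ZFS variant I have in mind would be something like this (device names are placeholders; I haven't tried it yet):

    Code
    # Create a mirrored (RAID1-like) pool from the two SSDs
    zpool create tank mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

    # A periodic scrub verifies every block against its checksum and
    # repairs silent corruption from the good mirror copy
    zpool scrub tank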

    I'm considering ZFS, but if you're sure that data protection in SnapRAID is okay, then now I'm not so sure anymore.
    Another thing: I don't know the cost and risk of using ZFS either.

    gderf, thanks for the reply; however, the links didn't answer all my questions.

    Okay - I've found out what the diff script is and what it does, but I still haven't found information on how it protects the data, and whether it really does.

    As I understand it, the diff script (or maybe I should rather say SnapRAID itself) can't recognise whether a file was deleted/modified by the user or by cosmic radiation (or whatever other nasty thing), and it uses a threshold factor to guess what actually happened.

    This is a bit ridiculous, because then it doesn't protect against hidden corruption at all. A single flipped bit can be assumed to be a user change and skipped over by the script.

    That kind of protection, where you believe your data is protected but it isn't, is not just weak protection - it is a dangerous lie.

    Or maybe I'm wrong - correct me, please.

    Thanks

    Hi,
    I have a simple, or maybe rather poor, SnapRAID setup: 2 x 4 TB SSDs + a 64 GB system USB stick.
    One 4 TB disk is for data.
    The second 4 TB disk is for parity.
    Content files are kept on the data disk and on the system USB stick.

    I've set the Scheduled Diff with default settings - every Sunday at 2:30 am.

    Everything seems to be as in the guides, but I don't understand one thing:
    What is this weekly automation for - syncing or scrubbing?
    If, after setting everything up and after the initial hashing and syncing, I delete some data or create or change some files, do I need to wait until Sunday for everything to get synced?
    Okay, I know I can sync manually, but shouldn't sync be scheduled to run more often - daily, hourly, etc.? How do I set this up? With a normal crontab?
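
    If a plain crontab is indeed the way, I guess a nightly sync would look something like this (just my untested sketch - the log path is arbitrary):

    Code
    # /etc/cron.d/snapraid-sync - run a sync every night at 3:00
    0 3 * * * root /usr/bin/snapraid sync >> /var/log/snapraid-sync.log 2>&1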

    But maybe I don't understand the logic of how SnapRAID works? To be honest, I'm not sure how SnapRAID can check for data corruption if it must be synced before scrubbing.
    Won't sync copy corruption from the data disk to the parity disk? If so, how can it be repaired if both are corrupted?
    But on the other hand, if I don't sync, scrub will report loads of errors for anything that has changed - like when I've deleted or modified something.
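
    From what I gather from the manual, the intended order is sync first, then scrub a portion of the already-synced data, roughly like this (the percentage and age values are just examples):

    Code
    # Update parity to match the current state of the data disk
    snapraid sync

    # Then verify e.g. 12% of the blocks not scrubbed in the last 7 days
    snapraid scrub -p 12 -o 7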

    Another unclear thing is exclusions - personally I think there is a bug in the plugin.
    When I add a new exclusion, the file browser makes me give an absolute path (the selection starts in the root directory). But if I select it this way, SnapRAID complains that the path is wrong.
    So I had to set a path relative to the data disk instead. This worked - or rather, SnapRAID didn't complain this time - but how do I know it worked? How can I check that the excluded directory is really excluded?
    Especially since I excluded this directory after the initial hashing and syncing.
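
    The closest I've come to verifying it myself is checking the generated config and listing what SnapRAID actually tracks (assuming the plugin writes /etc/snapraid.conf; "MyExcludedDir" is a placeholder):

    Code
    # See which exclusion rules the plugin actually wrote
    grep exclude /etc/snapraid.conf

    # List every file SnapRAID currently tracks; an excluded
    # directory should simply not appear in this output
    snapraid list | grep MyExcludedDir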

    Thanks in advance :)


    Thanks @Louie1961 and chente for your replies.
    You're right - Nextcloud isn't too heavy in terms of resource consumption, but using it just to access videos from my NAS would mean using only one of its many features.
    Especially since I would like to access these files with a specific app, not just a player embedded in a website - that's why I prefer a VPN.
    On the LAN I use the Android version of VLC, which works perfectly with Samba, and my NAS folders are displayed nicely with thumbnails.
    I don't want to share anything with third parties, so I don't need easy links - just something I could install on my own and my family members' phones.

    chente, your idea with the WireGuard plugin interests me - if I register DDNS on my router and forward the port (which one?), what software do I need to use on the Android devices?
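
    From what I've read so far, WireGuard listens on UDP 51820 by default, and the official WireGuard Android app imports a client config roughly like this (keys, addresses, and the DDNS name are placeholders):

    Code
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = my-ddns-name.example.com:51820
    # Route only the NAS subnet through the tunnel, not all traffic
    AllowedIPs = 192.168.1.0/24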

    As I've said, I did try ZeroTier One on Android, and the current version is broken (many users report this). But even when I installed an older version, which other people say works, in my case there was still no luck - VLC couldn't discover any shares from that network, even though ping worked.
    On the other hand, a network scanner app loses contact with most of the normal LAN when ZeroTier is active.
    I actually tested this with Tailscale as well - a ZeroTier alternative - and had the identical issue.
    So maybe my Samsung with Android 12 (or whatever) has this problem, but I'm fairly sure something is wrong there.

    Thanks

    Well, maybe Nextcloud would solve the problem, but it would be silly to install it just for that one feature.
    I don't want to install anything so big - I would rather run an OpenVPN server and a client app on the phone.
    I'm just not sure whether Android can use a VPN for any purpose other than redirecting internet traffic.
    There is some conflict between the LAN and the VPN LAN... I think Android can't live with two LAN networks at once.

    Hi,

    I wonder what the best solution is to access shares over the internet (e.g. from a mobile phone)?
    On Synology I had QuickConnect, but since I sold my old DSM box and moved to a DIY solution, I have a problem with this.
    Currently I use ZeroTier One, which creates a virtual LAN, so access through it is similar to a real LAN.
    It would be great, but the Android app seems to be broken, so I can't access the server anyway.
    Another thing is that the ZeroTier app also blocks the local LAN, so while it's running I can't access any device on my local LAN - this was happening even back when it still worked a few months ago.

    Any ideas ?

    Thanks

    Okay, I think I'm getting somewhere.

    It looks like the installation media has an EFI partition, and the HP automatically starts in UEFI mode, which causes the installer to enter UEFI mode too. In that mode the installer cannot install a legacy boot loader. In simple words: EFI or nothing.

    But I can boot the same USB stick in legacy mode; this gave me the option to install the legacy version of GRUB2.

    And now I have it done.
    So the problem was caused not by HP, but by a weak install script that could not manage a legacy GRUB installation.
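
    For anyone hitting the same wall: the manual fix boils down to installing the BIOS/legacy GRUB target from the installed system or a chroot (the device name here is just an example - point it at your system disk, not a partition):

    Code
    # Install BIOS/legacy GRUB to the MBR of the system disk
    grub-install --target=i386-pc /dev/sdb
    update-grub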



    A few words about my dead USB drives - I think they died because I used the guided partitioning mode.

    In guided mode, part of the drive is allocated as a swap partition.
    I have loads of RAM, but after a few weeks of uptime some swap gets used anyway.
    I think this created bad sectors and killed the USB drives.

    So it is not only a question of protecting the system partition, but the whole drive, where the other partitions also live.
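
    What I plan to do on the next install is simply keep swap off the stick. Something like this should work on an existing system (standard Debian tools; check /etc/fstab before editing it blindly):

    Code
    # Stop using swap immediately
    swapoff -a

    # Comment out the swap line in /etc/fstab so it stays off after reboot
    sed -i '/\sswap\s/s/^/#/' /etc/fstab

    # Or, less drastic: make the kernel very reluctant to swap
    echo 'vm.swappiness=1' > /etc/sysctl.d/99-swappiness.conf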

    Thanks for the help, but something is still not right.

    The guide suggests not using UEFI, so I switched it off in the BIOS settings and started the installation. But the installer wants to create an EFI partition anyway, and if I partition the disk on my own, deleting the swap and EFI partitions, it can't install GRUB and the whole installation fails.

    The guide you linked tells me to install Bullseye, which is out of date - can I use the current stable version of Debian?
    I don't even know where to look for that old one; the Debian website is so messy.

    Did you have docker installed and use /var/lib/docker by any chance?


    HP doesn't do Linux users any favors when installing Linux. Using a desktop as a server can also present challenges. Not much OMV can do to fix this.

    No, I didn't use Docker at all.

    I actually think it's Debian's fault. I have dealt with many HP PCs, servers, and laptops and never had installation issues - except maybe with Debian and OMV.


    As for the dead USB sticks - I did use the flashmemory plugin with the first USB stick that died. With the second one I can't remember, because it was a quick installation, as I needed to get the server up ASAP.

    As for the current installation - I will try the Debian netinst as you suggest, but really, this is a bit disappointing.

    I've been a Linux user since I sold my Amiga in 2004, and maybe I had some installation issues in the beginning, but what's happening here is really weird.

    I'm just trying to install OMV6 on my old HP Slimline 411-a0005ns and can't get it working.

    I've downloaded "Stable" release (6.0.24) image, and wrote it on Sandisk Cruiser (16GB USB3.0).

    As the system disk I've bought a brand new SanDisk 64 GB stick (USB 3.2).

    Both media are inserted into USB 3.0 ports on the back of the PC.

    For the installation, no hard drive is connected yet - that will be done after OMV is installed.


    First attempt:

    The HP Slimline has UEFI enabled by default, so I left it on.
    During installation, the installer informed me that there was more than one drive in the PC (yes - there was the fresh USB stick plus the installation media). It also asked whether I wanted to write an EFI partition, which might conflict with an existing operating system. I was a bit confused, because there was no operating system installed there, so I answered NO - and after the installation it couldn't boot.

    Second attempt:

    I did exactly the same, but answered YES to installing the EFI partition. Unfortunately, the installer got stuck downloading some additional components. So, reboot.

    Third attempt:

    The installation seemed to complete, but there was no bootable system disk afterwards.

    Fourth attempt:

    I decided to switch UEFI off - installed again - installation passed - reboot - no system disk. I went to the boot menu and the USB stick was visible there, but with no GRUB on it.


    Conclusion:
    Something is very wrong here. I remember having similar issues with other HP hardware (a MicroServer), but there I finally got it installed.
    It looks like the installer, regardless of the media selected for installation, always has an issue writing GRUB/EFI. I wonder whether it writes it to the installation media instead of the destination drive.
    I think that may be the issue, because after the second attempt the installer came up in low resolution, whereas before it started in a higher resolution.

    Does anyone have any idea what to do?

    To be honest, I'm a bit confused - I don't know if I want to use OMV anymore, as my MicroServer crashed twice with a completely dead USB drive, just a few months after installation.
    Before that, the MicroServer ran for two years with Xpenology, and I changed to OMV expecting it to be more reliable as a native OS with better support - unlike Xpenology, which is a hacked version of Synology DSM.
    Now I'm simply not sure - what the hell is going on here?

    Thanks for all the answers.

    In total I have had problems with USB sticks three times.
    The first time it was a SanDisk 128 GB USB 3.1 stick (I don't remember the model) that died after a few months serving as the "opt" disk inside my router.
    But that one is not related to OMV.

    The second time it was also a SanDisk - a Cruzer 128 GB - which died after a few months running as the system disk of an OMV6 installation.
    I didn't investigate what happened - it had the flashmemory plugin installed.

    The third time is now - this time I don't know the USB stick's brand; I didn't bother after my experiences with SanDisk.
    I don't remember for sure, but I think flashmemory was installed.
    When it crashed I didn't take any screenshot and didn't have time to check whether the USB stick was dead, or full, or what.
    The only thing suggesting it may have been full was one message I remember - it said that it can't write to swap because it's full.

    At work I run an Ubuntu server, and what I've noticed is that every few months I have to purge old log files because gigabytes of data get recorded. But that is a web server - so there are Apache logs, access logs, etc.
    Here we have a Samba server which is mostly doing nothing.
    Why wouldn't 128 GB, or 64, or even 32 GB be enough?
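
    For my own sanity, next time I will check what is actually filling the stick before rebooting - standard tools, nothing OMV-specific:

    Code
    # Which top-level directories are eating the disk?
    du -xh --max-depth=1 / | sort -rh | head

    # How much space the systemd journal takes, and how to cap it
    journalctl --disk-usage
    journalctl --vacuum-size=100M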

    Previously (before I migrated to OMV) I was using a Synology NAS and Xpenology - and I never had any problems. As for Synology - okay, it doesn't use USB, but Xpenology boots from a stick as well.
    Don't take me wrong, I'm not complaining; I would just like to understand how to set OMV up so that it won't surprise me again.

    The USB stick was working as the system disk.

    Data is stored on 4 HDDs and there are no issues with them.

    The installation had only very simple features enabled - just Samba and SFTP.

    And actually the whole NAS wasn't used very often.

    Today I noticed it wasn't responding. On screen there were warnings about errors found and no space on the swap partition.

    After a reboot, just the word 'GRUB' in the corner, and nothing else happening.

    This has happened for the second time, on two different USB sticks.

    So I'm starting to worry whether OMV is reliable. How is it that failures like this occur just like that?