Search Results

Search results 1-20 of 1,000. More results are available; please refine your search parameters.

  • Quote from macom: “I also have to mention that I recently did a rollback with snapper on the HC2 ” Yep, on the OMV images for ARM (except RPi and two or three other boards where I had bootloader troubles with btrfs) rollbacks have been possible for a long time since we ship with btrfs for the rootfs. But I'm almost always talking about btrfs for data shares, and there it really gets interesting with OMV5 in the future, since then not everything will be 'outdated as hell' like today (Debian 10 relies on kernel 4.1…
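    A minimal sketch of what such a rollback looks like with snapper (assuming a snapper-managed btrfs rootfs; the snapshot number is made up):

        snapper list          # show existing snapshots and their numbers
        snapper rollback 42   # make a writable copy of snapshot 42 the new default
        reboot                # boot into the rolled-back rootfs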

  • Quote from macom: “Just want to mention, that snapper is doing this ” Yep. But snapper only does snapshots and as such only protects against logical failures (like accidentally deleting/overwriting data, or, when used on the rootfs, allowing a revert to the last known working state after failed software upgrades -- this was the problem @dkxls ran into years ago on OpenSuse and the challenges are outlined here). If you want to use snapshots also for 'real' backup protecting from physical failures then wi…
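    As a hedged sketch of the 'real' backup idea: a read-only snapshot can be replicated to a second btrfs device with send/receive (all paths and the host name are made up):

        # create a read-only snapshot of the data subvolume
        btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-2019-01-01
        # replicate it to a second btrfs filesystem on another machine
        btrfs send /srv/.snapshots/data-2019-01-01 | ssh backup-host btrfs receive /mnt/backup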

  • cockpit on RPi

    tkaiser - - General

    Post

    Quote from NasAxMan: “W: Target Packages (main/binary-armhf/Packages) is configured multiple times in /etc/apt/sources.list:7 and /etc/apt/sources.list.d/openmediavault-kernel-backports.list:1 ” This warning is the usual result of activating the backports repo on ARM where it's rather useless anyway. See Problem with omvextra repositories Looks like Cockpit relies on thermald while it could use sysfs to query the temperature. More info and maybe the place to discuss: github.com/cockpit-project/cockp…
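    To get rid of the warning, one of the two duplicate 'deb …' entries named in it has to be disabled; a sketch (which entry to keep is your choice):

        # comment out the duplicate line in one of the two files, e.g.
        nano /etc/apt/sources.list.d/openmediavault-kernel-backports.list
        apt-get update   # the warning should be gone afterwards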

  • Related: Improvement: Make unmounted /srv folders not writable by default

  • Quote from dkxls: “I agree, my reservations for Btrfs are mostly based on FUD. Not the best way to make such decisions, but I can’t dismiss them straightaway either ” We're all affected by this, and I would guess the older we get the more forum.openmediavault.org/index…769fc69e972addd5e30a768a8 If things go wrong it's kind of a normal reaction to blame stuff that is new to you or unknown or not fully understood yet. Then, instead of a failing/weak PSU dropping disks out of ZFS pools, OMV's proxmox kernel …

  • Quote from Goobs: “Root directory, "/", is at 100% capacity ” You need to umount all your /srv shares and then check the contents of each directory as outlined here: OMV4 - Systemplatte (120GB SSD) zu 94% voll? (Google translate version)
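    A sketch of that procedure (the mount point name is just an example; OMV usually mounts data disks below /srv/dev-disk-by-label-*):

        umount /srv/dev-disk-by-label-data   # unmount each /srv share first
        du -sh /srv/*                        # whatever still shows up now lives on the rootfs, hidden below the mount points before
        mount -a                             # remount everything when done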

  • Quote from daveinfla: “It took 2 hours to write a 32GB USB Thumb drive with Win32DiskImager, is there anything faster? ” Why do you think the software would be to blame? Which write speeds did h2testw report when you tested the drives for fake capacity?

  • My Closet Monster

    tkaiser - - My NAS Build

    Post

    Quote from reverendspam: “I tried initially using ZFS on OMV, but zfs on the proxmox kernel was a bit buggy for me and not quite the same as on BSD. The freenas ZFS pool imported fine, the pool just would not stay connected ” Which was related to your power supply, right?

  • Quote from dkxls: “btrfs. I have used it around 6 years ago ... OpenSUSE ... the snapshots btrfs automatically did and eventually ran out of disk space in /var ” Btrfs doesn't do snapshots on its own. I believe you're talking about OpenSuse's snapper instead? Anyway: there's no need to create constant snapshots (if you do snapshots then choose a tool that automates everything for you and deletes unneeded snapshots based on a retention policy -- I already mentioned btrbk above). And experie…
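    Cleaning up leftover snapshots by hand is simple enough (the snapshot path is made up; a tool like btrbk automates exactly this via its retention policy):

        btrfs subvolume list /                               # find old snapshots
        btrfs subvolume delete /.snapshots/root-2018-05-01   # free the space they pin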

  • Couldn't log in to the web interface

    tkaiser - - General

    Post

    Quote from johnfc2019: “Once the rsyslog had restarted, login was fine ” Which platform do you use? x64? ARM?

  • Quote from TrickyTrix: “mdmonitor.service: Main process exited, code=exited, status=1/FAILURE ” You should check this. And as already written, I would strongly recommend running a forced fsck on your RAID1 --> fsck -f
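    A sketch of that check, assuming the array is /dev/md0 and carries an ext4 filesystem:

        umount /dev/md0    # the filesystem must not be mounted while checking
        fsck -f /dev/md0   # -f forces a full check even if the filesystem is flagged clean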

  • Helios - HC2 - Or Microserver?

    tkaiser - - My NAS Build

    Post

    Quote from Adoby: “If you install Emby and don't use docker, Emby will by default be placed in /var/lib/emby on your rootfs, along with metadata folders and configuration. That is very bad if your rootfs is on a SD card ” Just adding: if the disk on the HC1/HC2 should not spin down anyway, it's always worth considering partitioning the disk into one appropriately sized rootfs partition and one data partition, and then using nand-sata-install to move the rootfs to the disk.
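    A hedged sketch of such a layout (device name and sizes are examples; afterwards run nand-sata-install and point it at the first partition):

        parted -s /dev/sda mklabel gpt
        parted -s /dev/sda mkpart rootfs ext4 1MiB 16GiB   # appropriately sized rootfs partition
        parted -s /dev/sda mkpart data ext4 16GiB 100%     # rest of the disk as data partition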

  • Helios - HC2 - Or Microserver?

    tkaiser - - My NAS Build

    Post

    Quote from denny2k2: “why people favour Emby over Plex in these situations? ” In this situation because it allows using HW acceleration for video transcoding. But you need to keep in mind that the ARM SoC on the HC2 is somewhat old and as such doesn't support the most recent codecs. But it's the same situation on x86 as well: the QuickSync implementation of older Intel CPUs also only supports older codecs.

  • Quote from Adoby: “Modern hdd/ssd firmware already corrects many errors and ECC memory can also help ” But this is more or less unrelated to the 'bit rot' problem since ECC used inside storage entities or ECC RAM only tries to ensure that a 1 is a 1 and a 0 is a 0 right now. Data degradation on (offline) storage is not addressed here at all. IMO you should really have a look at btrfs. Set up 2 HC2 with btrfs, use one for the productive data and the other for backups. Then use btrbk to do the backup thing…
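    A minimal btrbk configuration sketch for that two-HC2 idea (host name, paths and retention values are all made up):

        # /etc/btrbk/btrbk.conf on the productive HC2
        snapshot_preserve_min   2d
        snapshot_preserve       14d
        target_preserve         20d 10w
        volume /srv/data
          snapshot_dir  btrbk_snapshots
          subvolume     shares
            target send-receive ssh://backup-hc2/srv/backup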

  • Quote from tkaiser: “With a Helios4, 4 SATA disks and data integrity + 'self healing' in mind I would most probably choose a btrfs raid1 with 2 to 4 disks ” Disclaimer: I personally try to keep storage pools as small as possible (maybe because I deal with storage for a living and am constantly confronted with the downsides of really large storage setups, like an fsck or RAID rebuild taking days or even weeks). With btrfs and 32-bit systems like the Helios4 there seems to be an 8TiB …
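    For reference, creating such a pool and triggering the 'self healing' is straightforward (device names and mount point are examples):

        mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb   # mirror both metadata and data
        btrfs scrub start /srv/data                      # periodic scrubs detect and repair corrupted copies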

  • Quote from TrickyTrix: “two WD Red 4 TB drives in OMV RAID 1 config ... PicoPSU ” Also called a 'recipe for disaster'. If you want to play RAID you need reliable hardware; powering especially is essential. You could try to run quotacheck manually to see if the problem persists, and I would immediately also run a forced fsck on your RAID1 (of course you might suffer from silent data corruption even if the filesystem isn't corrupted or can be repaired)
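    A sketch of the manual quotacheck run (flags per the quotacheck man page; adjust to your setup):

        quotacheck -avug   # -a: all fstab filesystems with quota, -v: verbose, -u/-g: user and group quotas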

  • Missing space on boot drive...

    tkaiser - - General

    Post

    Quote from NasAxMan: “so you suggest to move the docker-path to an external hdd anyway? ” I'm not suggesting anything when I'm not familiar with the issue (like Docker + OMV). All I can tell is that low-end flash memory like SD cards or USB pen drives dies pretty quickly under a lot of write accesses with high Write Amplification involved (as is at least the case with RPi-Monitor).

  • Helios - HC2 - Or Microserver?

    tkaiser - - My NAS Build

    Post

    Quote from calexm: “Emby works on port 8096 by default, so there wasn't any conflict with OMV Web UI and Emby Web UI at all ” Thanks for clarifying. Might be worth a quick tutorial with some uploaded screenshots outlining how a cheap HC1 or HC2 can be used for NAS + Emby server with HW accelerated video transcoding. It's sad that most OMV users believe transcoding would need a huge Xeon box and 100W wasted for CPU cores being at 100% when a small ARM thingy can do the same at below 10W barely us…

  • You can ignore the message that no interfaces are available (it's a timing issue). Simply check on your router which DHCP address has been assigned, or try to access rockpro64 directly. Or log in at the console and type ip a to get the IP address the RockPro64 has been assigned by DHCP. But you should trash your 4GB card and get a good and genuine SanDisk A1 card with at least 16GB. Those old cards perform poorly, especially with random IO.

  • Quote from dkxls: “So, what I am not yet completely understanding is why I should use ECC RAM ” Since bit flips happen. We have a lot of servers in our monitoring and some of those with ECC RAM report correctable single bit errors from time to time -- even those that survived a 72 hour memtest burn-in. Those without ECC RAM have no ability to report bit flips, and as such we can only speculate about what happens there (if an application crashed, for example). The consequences of a bit flip range from 'no…
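    On Linux such correctable errors surface through the EDAC subsystem; a sketch of how to check the counters (requires ECC RAM and a loaded EDAC driver):

        grep . /sys/devices/system/edac/mc/mc*/ce_count   # correctable errors per memory controller
        grep . /sys/devices/system/edac/mc/mc*/ue_count   # uncorrectable errors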