Search Results

Search results 1-20 of 160.

  • System Disk Died - Help to Recover

    Sc0rp - - General

    Post

    Re, Quote from Ener: “but if you have an idea what im doing wrong, I'll be very thankful for a hint or a solution for my dilemma. ” I assume, since you have a (U)EFI partition, your old drive used a GPT scheme instead of the old MBR scheme ... so the command dd if=/mnt/backup/grub_parts.dd of=/dev/sda bs=512 count=1 is wrong, because GPT occupies 34 x 512 bytes ... Sc0rp
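    A minimal sketch of what backing up and restoring the GPT area could look like, assuming the disk really uses GPT and /dev/sda is the right device (the backup file name is a made-up example; the protective MBR, the GPT header and the 128 partition entries together occupy the first 34 sectors):
      # back up the first 34 x 512-byte sectors (protective MBR + GPT header + partition table)
      dd if=/dev/sda of=/mnt/backup/gpt_head.dd bs=512 count=34
      # restore later by swapping if= and of=
      dd if=/mnt/backup/gpt_head.dd of=/dev/sda bs=512 count=34
    Keep in mind that GPT also stores a backup header at the end of the disk, so after restoring onto a different-sized disk a repair with a tool like gdisk is advisable.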

  • Possible to grow linear pool?

    Sc0rp - - RAID

    Post

    Re, maybe the best strategy for you is not RAID0 ... have you heard of UnionFS (in OMV v3.x it is mergerfs)? I don't know why OMV cannot "grow" a RAID0 - and I'm missing the screenshots of "Physical disks" and "File systems" - but you can grow your array via the console as well (SSH or local). Sc0rp
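    As a rough sketch of the console route, assuming the pool is a native linear md array at /dev/md0 with an ext4 filesystem on it, and /dev/sdd is the disk to add (device names are examples):
      # append a further disk to an existing linear md array
      mdadm --grow /dev/md0 --add /dev/sdd
      # then enlarge the filesystem on top of it
      resize2fs /dev/md0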

  • Raid 10 boot - how to?

    Sc0rp - - RAID

    Post

    Re, Quote from Lazurixx: “isnt it pretty normal to have high read and writes? ” No, that's not normal for a system drive ... at least regarding the writes. Maybe you have too little RAM, or you have misconfigured a "cache system" ... If you have lots of writes on the SSD, you should use a pro-grade SSD anyway (like the Samsung Pro series) ... but you should check which configuration causes the high write load, because that will destroy any SSD - and of course it affects ALL members of a RAID1! (and, for RAID1, …
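    One hedged way to find out where the writes come from is iotop (not installed by default on Debian/OMV):
      apt-get install iotop
      # -a shows accumulated I/O since start, -o hides processes that do no I/O
      iotop -ao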

  • Re, you have to use the console, I think - there you can work with "MC" (Midnight Commander, a Norton Commander clone) or "mv", the Linux move command. Or you (install and) use tools like CloudCommander ... e.g. in a Docker environment. Moving files between filesystems always has to be "physical", between the partitions ... Sc0rp
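    A minimal sketch of such a "physical" move on the console, with placeholder mount points in the style OMV uses under /srv (adjust the paths to your own setup):
      # mv across filesystems copies the data to the target and then removes the source
      mv /srv/dev-disk-by-label-disk1/share /srv/dev-disk-by-label-disk2/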

  • Execute fsck on all filesystem at reboot

    Sc0rp - - General

    Post

    Re, normally you can try (on the console): touch /forcefsck This creates the empty file "forcefsck" in the root directory - it will try to force an fsck on all FS listed in the fstab ... But I'm not sure whether this works on ZFS/BTRFS too; XFS is covered, as well as every EXT version ... (on OMV 4.x with systemd I have no clue ... currently ... but I read that there are kernel boot options for this) Sc0rp
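    The kernel boot options hinted at are presumably the systemd-fsck ones, fsck.mode=force and fsck.repair=yes. A sketch for Debian/OMV 4.x with GRUB:
      # in /etc/default/grub, append the options to the kernel command line, e.g.:
      # GRUB_CMDLINE_LINUX_DEFAULT="quiet fsck.mode=force fsck.repair=yes"
      update-grub
      reboot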

  • Possible to grow linear pool?

    Sc0rp - - RAID

    Post

    Re, Quote from elastic: “I have the "Test" pool selected, but am trying to grow the "Home" pool which is linear. I'm aware stripe can't be grown ” Then you have to select the other array ... but: - you can grow any RAID array made with md, even striped ones - your striped raid1 array seems to be made of two fake-RAID drives (dm = device-mapper) - so you can alter this array only in the fake-RAID environment (the BIOS of the controller); md only "reads" it ... Which physical disk do you want to add to "Home"? …
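    For a native md array (not the fake-RAID one), growing by one disk usually follows the add-then-reshape pattern; a hedged sketch for a redundant level such as RAID5, with example device names and an assumed ext4 filesystem:
      # add the new disk as a spare, then reshape onto it (example: 3 -> 4 devices)
      mdadm --add /dev/md0 /dev/sdd
      mdadm --grow /dev/md0 --raid-devices=4
      # watch the reshape, then grow the filesystem
      cat /proc/mdstat
      resize2fs /dev/md0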

  • Raid 10 boot - how to?

    Sc0rp - - RAID

    Post

    Re, why would you make a RAID10 boot disk? Setting up RAID devices as boot devices is more difficult, because you have to add the RAID modules to the initramfs ... but if you already have an SSD as a boot device, it's not worth the time ... And you have to do this first ... before installing OMV, so you have to use the Debian netinstall ISO and, after everything is finished, install OMV on top of that. For making a RAID1 boot device, just google around ... (e.g. edoceo.com/howto/mdadm-raid1) Sc0rp
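    As a rough sketch of the finishing steps after installing Debian onto an md RAID1 (legacy BIOS/GRUB assumed, device names are examples), so that both members stay bootable:
      # make sure the md modules end up in the initramfs
      update-initramfs -u
      # install the bootloader on both RAID1 member disks
      grub-install /dev/sda
      grub-install /dev/sdb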

  • Re, Quote from gderf: “The problem with using dd for this is that when cloning a smaller source drive to a larger target drive, the size of the partiton on the target drive will be the same size as on the source drive. ” Correct, but you can alter that later ... or you can use the additional space for a new or another partition ... Sc0rp
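    A hedged sketch of enlarging the cloned partition afterwards, assuming an ext4 partition and the growpart tool from the cloud-guest-utils package (device and partition numbers are examples):
      apt-get install cloud-guest-utils
      # grow partition 1 of /dev/sdb to use the rest of the disk
      growpart /dev/sdb 1
      # then enlarge the ext4 filesystem inside it
      resize2fs /dev/sdb1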

  • RAID5 with unexplained error

    Sc0rp - - RAID

    Post

    Re, Quote from danieliod: “Will RAIDZ1 can also cope with such issue? ” More or less - "cutting" the power destroys any information held in caches ... HDD cache & RAM, which affects any FS. ZFS brings the best algorithms to deal with that, but unexpected power loss should be avoided at all costs! Sc0rp

  • Power loss and system drive will not boot

    Sc0rp - - RAID

    Post

    Re, *GRATS* ... you made it! Quote from linndoug: “Drive SDB still looks the same. ” Yeah, that's the bug which often hits md RAID arrays and degrades them ... you may have to issue this command several times to get rid of the wrong partition info ... Sc0rp

  • Re, Quote from jfromeo: “Thank you both, I will do it via shell and let you know ” Cloning via shell is done with "dd" - but of course you need a free SATA port for the new SSD ... Sc0rp
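    A minimal sketch of such a whole-disk clone, assuming /dev/sda is the old system disk and /dev/sdb the new SSD (verify the device names first - dd overwrites the target without asking):
      lsblk                                      # check which disk is which
      dd if=/dev/sda of=/dev/sdb bs=1M status=progress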

  • Power loss and system drive will not boot

    Sc0rp - - RAID

    Post

    Re, in your second "blkid" SDE is available but SDB is missing ??? What the hell is going on on your box? fdisk shows SDB ... very curious. Quote from linndoug: “How best do I recover this raid and restore my system? ” What you can do: - assemble the remaining 3 drives as a "degraded array" -> and back up your data (read-only mode preferred) mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sde /dev/sdf (change md0 to md127 if you want; maybe readonly won't work) You can then escalate the command: m…
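    Building on the command above, a hedged sketch of the backup step (mount point and backup target are placeholders):
      # assemble the three remaining members read-only as a degraded array
      mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sde /dev/sdf
      # mount it read-only and copy the data off before any repair attempt
      mount -o ro /dev/md0 /mnt
      rsync -a /mnt/ /path/to/backup/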

  • Re, Quote from subzero79: “Current omv uses label as first option for device mounting and mount path generation. ” Right, therefore: Quote from Sc0rp: “Just take a look via console (SSH) in your /srv directory: ls -la /srv ” ... to check what is used ... Quote from flvinny521: “I assume that the drive under /dev/sda is the one that the kernel sees as "port 1," and so on? ” Right ... but only for older systems. Currently the fstab uses the UUID for mounting, which is more fail-safe. As @subzero79…
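    To illustrate the UUID-based mounting: blkid shows the filesystem UUIDs, and an fstab entry then references the UUID instead of the device node (the UUID and mount path below are made-up examples):
      blkid /dev/sda1
      # example /etc/fstab entry:
      # UUID=1234abcd-12ab-34cd-56ef-1234567890ab  /srv/data  ext4  defaults  0  2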

  • Re, Quote from flvinny521: “I label the drives (okay, partitions) according to their position in the hot-swap bay. ” Understood, but be aware that this naming structure is not bound to a "bay" - it is bound to the (SATA) ports on your mainboard, and to which of them is recognized (or hard-coded) first by the BIOS/UEFI and then found by the kernel (kernel-module related) ... so, if your BIOS/UEFI changes the order, or the kernel module does, it will not work anymore. Therefore most…
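    To see names that do not depend on the detection order, you can look at the persistent symlinks udev creates:
      ls -l /dev/disk/by-id/      # names based on the drive's model and serial number
      ls -l /dev/disk/by-uuid/    # names based on the filesystem UUID
      ls -l /dev/disk/by-path/    # names based on the physical port/path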

  • Replace failing disk on ZFS pool?

    Sc0rp - - General

    Post

    Re, @tkaiser: I'm not a ZFS crack ... but why does the ZPOOL show 10.9T? For me it is a 4x3TB ZFS-Z1 - how did ZFS calculate the parity? (shouldn't it be something around 9T, or less in TiB?) Sc0rp
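    A possible explanation (hedged - if I recall correctly, zpool list reports the raw pool size in TiB, with the RAIDZ parity not yet subtracted):
      4 x 3 TB = 12 x 10^12 bytes ≈ 10.9 TiB   (raw size, parity included)
      usable after RAIDZ1 parity ≈ 3 x 3 TB = 9 x 10^12 bytes ≈ 8.2 TiB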

  • Re, Quote from flvinny521: “The drive labels of the old drives are the location of the drives in the bays, so I want the new drives to use the same labels. ” I'm struggling to understand what "labels" you mean, and why you need to bind them to "bays". Drives use serial numbers for identification (which are unique), and partitions use UUIDs (which are also unique); the only "static" part can be the root filesystem ... with the mount points under /srv (there you can have a directory named "bay1" or …
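    If the goal is simply to carry the old "bay" labels over to the new drives, the filesystem labels themselves can be set on the console, e.g. for ext4 (device name and label are examples):
      e2label /dev/sdb1 bay1      # set the label of an ext4 filesystem
      blkid /dev/sdb1             # verify label and UUID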

  • Re, Quote from tranz: “now I see unused file system labeled from raid 5. ” Where did you see that? That's not in your screenshots ... Please enable SSH on your NAS and copy the outputs as text into the posts, this will make the analysis a lot easier ... Here are the important commands for the md RAID: cat /proc/mdstat mdadm -D /dev/md0 as well as a complete (not truncated) output of: blkid lsblk Btw.: the error shown in your screenshots is not RAID-related, it is FS-related ... maybe you had a powerlo…

  • RAID5 with unexplained error

    Sc0rp - - RAID

    Post

    Hi, beginning from the start post I have a question: How do you use your box? 24/7? Or do you power it down in between? The errors are not RAID-related - only FS-related. And it seems that you (sometimes?) hard-cut the power to your system, or something similar. Hint: fsck often needs more than one run ... Quote from danieliod: “And what is the recommendation for the 3X3TB drives I have? I need at least 6 GB for storage. ” - Backup, Backup and Backup ... then the redundancy is optional - RAID5 (even it i…

  • Power loss and system drive will not boot

    Sc0rp - - RAID

    Post

    Re, Quote from linndoug: “It appears that drive SDE is part of SDB. ” Where did you see that? That cannot be or happen ... under Linux, lsblk shows your setup correctly (as the kernel detects it) - and there SDE is also completely missing. Just check the cabling (incl. power) of this drive - since sdb shows wrong partitioning, sde is your only hope to get your array back ... (on R5 you need 3 of the 4 drives to recover). Sc0rp