Posts by Sc0rp

    Re,


    the only solution I can give is to terminate the VPN tunnel on your router instead of your NAS box. This may cause performance issues on high-bandwidth WAN connections (most router SoCs have poor VPN performance), but it will address your complaints ...


    Btw. terminating the VPN tunnel on your NAS box turns it into a full-featured router on top of the rest of its services, which significantly increases the security implications.


    Sc0rp

    Re,

    Now I wanted to run SABNZBD as an http_proxy, so that I can access the web interface without a port.

    How did you do/try that?


    But unfortunately there was apparently an error, and I can no longer access OMV or the SABNZBD interface. Can anyone help me?

    omv-firstaid ... usually restores the network, so that access to the OMV WebGUI is available again.


    Sry, no time 4 english translation ...


    Sc0rp

    Re,

    Why?

    Because it is completely unclear which IP addresses are configured on the network cards of the omv-box, so I decided to use the "multitool" ... you can masquerade 192.168.178.0/24 entirely behind 192.168.20.0/24 without altering any client ... for example.
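
    A minimal sketch of that masquerading idea with iptables (the interface names are assumptions, adapt them to the omv-box):

    sysctl -w net.ipv4.ip_forward=1                                             # let the kernel route between the NICs
    iptables -t nat -A POSTROUTING -s 192.168.178.0/24 -o eth0 -j MASQUERADE    # hide the .178 net behind eth0

    All clients in 192.168.178.0/24 then appear with the eth0 address of the omv-box, so nothing has to be changed on them.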


    Your solution assumes that 192.168.178.1 is bound to the 2nd NIC of the omv-box ... I think ... but I'm fresh back from vacation.


    Sc0rp

    Re,


    in the omv-plugins you can find the plugin for altering the kernel routing table ... it is called "openmediavault-route" (currently v3.1.4, and not from the omv-extras repo).


    Afaik "policy routing" is not needed, just google for "linux iptables forwarding" (and some additions).


    Sc0rp

    Re,

    I had a RAID1 for some years, but then realized that I do not need availability. What I want is to keep my data safe. The problem with RAID1 is that if you delete a file by accident, it will shortly after be deleted on both disks. Same if malware is encrypting your samba share.

    Addition: "it will be shortly after immediately deleted on the both disks" - and RAID1 does not provide any integrity (checksum or XOR)!


    RAID1 is not recommended for data safety. Use ZFS, BTRFS or a one-data-disk/one-rsync-disk setup ... and of course an (external) backup!
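
    A minimal sketch of the one-data-disk/one-rsync-disk approach (the mount points are assumptions):

    rsync -a --delete /srv/datadisk/ /srv/backupdisk/    # mirror the data disk onto the second disk

    Run it from cron; unlike RAID1, an accidental deletion or an encrypting malware only propagates at the next run, so there is a time window to recover the files.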


    Sc0rp

    Re,


    yes, every now and then I take a vacation too ... and impatience is absolutely the wrong mindset for a RAID setup :P.


    But be that as it may - your partition layout looks totally chaotic (I have sorted it by disk):


    blkid
    /dev/sda1: UUID="205fa2e9-5e4a-4040-bcde-9375b7c1e496" TYPE="ext4" PARTUUID="d01c17b6-01"
    /dev/sda5: UUID="6461da0d-1ec1-48e4-bea7-8564c674c904" TYPE="swap" PARTUUID="d01c17b6-05"


    /dev/sdb1: UUID="87a170c0-9254-4aef-93ab-1243a3f1b917" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="000e4123-01"
    /dev/sdb2: UUID="a38e1d0e-6f31-47ba-8c95-a15360cabdd7" TYPE="swap" PARTUUID="000e4123-02"
    /dev/sdb3: UUID="247ebe0f-b285-b0ea-72b3-584665ba4b27" TYPE="linux_raid_member" PARTUUID="000e4123-03"
    /dev/sdb4: UUID="797b32b1-ff92-48f7-96d5-1c289ef4d58d" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="000e4123-04"


    /dev/sdc1: UUID="87a170c0-9254-4aef-93ab-1243a3f1b917" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="0000c25a-01"
    /dev/sdc2: UUID="a38e1d0e-6f31-47ba-8c95-a15360cabdd7" TYPE="swap" PARTUUID="0000c25a-02"
    /dev/sdc3: PARTUUID="0000c25a-03"
    /dev/sdc4: UUID="797b32b1-ff92-48f7-96d5-1c289ef4d58d" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="0000c25a-04"


    /dev/sdd1: UUID="87a170c0-9254-4aef-93ab-1243a3f1b917" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="000938ae-01"
    /dev/sdd2: PARTUUID="000938ae-02"
    /dev/sdd3: UUID="25c82dbd-52a3-4ddd-909e-086087fdf1d3" EXT_JOURNAL="00000500-0000-0000-0000-00000020c51d" TYPE="ext4" PARTUUID="000938ae-03"
    /dev/sdd4: PARTUUID="000938ae-04"


    /dev/sde1: PARTUUID="000e273f-01"
    /dev/sde2: UUID="e54a1cc4-6baa-4487-b957-02e207c66c38" TYPE="swap" PARTUUID="000e273f-02"
    /dev/sde3: PARTUUID="000e273f-03"
    /dev/sde4: PARTUUID="000e273f-04"


    -> I see only ONE raid member here ...
    -> lots of "short" PARTUUIDs ... did you somehow build the raid under a fake RAID and/or under Windows (NTFS?)?
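
    To double-check which partitions really carry an md superblock, a minimal sketch (device names taken from the listing above):

    mdadm --examine /dev/sdb3 /dev/sdc3    # prints the md superblock of each device, or an error if there is none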


    Which system are you running?


    Sc0rp

    Re,


    there is something seriously wrong with your "4 (four!) drives", because:
    - blkid shows only 2 (two!) autodetected raid members
    - and fdisk shows 3 (three!) 4TB drives, but one of them with a GPT ...


    Is there any path for recovery of my raid?

    Sorry, it doesn't seem so - because one drive for the recovery is missing ... sde is completely missing! (if you can manage to reconnect this drive to your box, you may get a better chance). Drive sdb seems to be damaged ...


    Sc0rp


    EDIT: please close your other thread! (and copy your post over)

    Hi,

    Did you have a look at this one?

    Yeah ... after the vacation ... "of course" :D


    I tried to see with gparted what was happening, and I saw that both 6TB hard drives didn't even have a partition; it was like I never created an ext4 RAID1.

    That's normal, because OMV uses the block-device option of md (whole disks as members, without a partition table) ...
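
    A sketch of what that means under the hood (device names are assumptions):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc    # whole disks as members, no partitions

    gparted then finds no partition table on the member disks, which is exactly what you observed.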


    But what happened to your RAID1 is unclear:
    - blkid shows your (correct) members
    - "cat /proc/mdstat" shows the raid array running
    - "mdadm --detail --scan --verbose" also shows correct info
    - "cat /etc/mdadm/mdadm.conf" looks normal as well


    Questions:
    - what is the current status?
    - which OMV version do you use?


    Analysis (so far):
    It seems that your RAID1 got struck by two well-known bugs:
    - the Debian-related bug of md0 changing to md127 (caused by naming the array)
    (this should be the minor error, since autodetect should do the trick ... automatically)
    - and (possibly) the boot-sector bug of the hard drives, which makes the "superblock" disappear after a reboot
    (unconfirmed)
    - also, the fdisk readings are weird, try sfdisk instead (see below) ... the drives should show up as 5.7 TiB drives, not as 3.7 TiB!
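
    A quick check, as a sketch (the device name is an assumption):

    sfdisk -l /dev/sdb    # should report the full capacity of the 6TB drive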


    Hints:
    (especially for @betupet)
    - RAID1 is one of the worst raid levels (right after 0 :P) - here I agree with @tkaiser (a one-drive config with rsync to the second drive instead, or ZFS)
    - never use RAID setups with a PicoPSU and/or USB/SATA adapters; that also includes every raid setup on an RPi or other single-board computers


    Sc0rp

    Hi,

    In the file /etc/mdadm/mdadm.conf I cannot see the configured hard drives.

    Yeah, since you use "superblock auto-detect" by default, there is nothing written down there ...
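
    If you wanted the arrays pinned in that file anyway, the usual way is (a sketch):

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # append ARRAY lines for the currently running arrays

    but with working auto-detection this is not required.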



    mdadm: /dev/sdf has no superblock - assembly aborted

    Seems to be no raid member - try blkid first ...
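
    A minimal check on that device (assuming /dev/sdf from the error above):

    blkid /dev/sdf              # a raid member shows TYPE="linux_raid_member"
    mdadm --examine /dev/sdf    # prints the md superblock details, or an error if there is none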


    Sc0rp

    Re,

    Can I just try the other commands like --run or --run --force without destroying something?

    You can try, but it will fail too - I wouldn't recommend it, because your array is not in an "in sync, but not started" state ...


    What do these give:
    cat /proc/mdstat
    mdadm -D /dev/mdX (take the number (X) from the output above)?


    I figured out it was the power supply (external power supply with PicoPSU).

    Uhm, I didn't read that part ... thanks to @tkaiser for pointing me at it.


    REALLY? PicoPSU? Which one in detail? What was the power source (primary side)?


    Sc0rp

    Re,

    Huh? How's that?

    Put the switch either in "slave" mode, or use a switch which provides more options regarding "frame distribution", and on the linux-box do some "magic" with the "transmit hash policy to use for slave selection" ... article on the algorithms: reference (german).
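
    A minimal sketch of those options on the Linux side (a Debian-style /etc/network/interfaces fragment; interface names and the chosen policy are assumptions):

    iface bond0 inet static
        bond-slaves eth0 eth1
        bond-mode 802.3ad                 # LACP - must match the switch config
        bond-xmit-hash-policy layer3+4    # the "transmit hash policy" mentioned above

    layer3+4 hashes on IP addresses and ports, so different TCP connections can be spread over different slaves.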


    It is not easy, needs a lot of knowledge and ... of course ... it will work on 1-to-1 links too (even server to server). We tried it only once, for a customer, because he was totally sure about gaining 800 MiB/s+ on an 8x1GbE bond ... I think you can imagine how that failed ...
    Another approach is using VLANs (without LACP); it's tricky too, but it separates the clients the same way as the other hashes, I think (never tried it).



    But the best alternative is 10GbE ... once the switches become affordable for home users ... I think.


    For bonding/link aggregation you simply need more clients - maybe virtualization can be an approach?


    (Personally I gave up using bonds in my home network, even though you can mix fiber and copper links - for now I just schedule my transfers to run at night over single 1GbE links when I have masses of data ...)


    Sc0rp

    Re,

    I deal with (md)RAID now for over 20 years and I know exactly why I avoid it if possible.

    Same amount of time, but a different approach ... I use RAID in hardware and software wherever it is needed for availability and/or business continuity, tuning it for maximum data safety ... with a working and tested backup strategy, a UPS, and filesystems which provide a working integrity level. Most people out there underestimate the complexity of ZFS as well! RAID only seems to be "easy" ...


    But what's the alternative? Using ZFS? Unfinished BTRFS? Buying an appliance? Looking at the mass of prebuilt NAS boxes (QNAP, Synology, Thecus, Asustor, ...), it seems that even software RAID is doing well for home users - read it again: HOME users. I think it is an easy approach to "block-layer concatenating" ... if you look around at the other approaches. Sure, it has its disadvantages and some "obstacles" you should avoid, but hammering "bah, don't use old-fashioned RAID crap" on users won't work, because of the lack of alternatives ... and you give no hints on other technologies.


    Make an article on it - please - with the alternatives, call it "better approaches" ... then you can link it whenever needed (and it seems you will need it more and more often). You can use it in your sig as well :D


    BR Sc0rp

    Re,

    +10 clients, but only if the correct algorithm is used on both server and switch

    Naaa, if you use a correctly configured 802.3ad (LACP) on bonded interfaces, it will truly work/scale ... on NFS.
    Samba up to v3 doesn't benefit from link aggregation ... (or is it "didn't benefit"?) ... at all :(
    (unfortunately I didn't test SSH/SCP or FTP ... on my temporarily set up 4x1GBit LACP bond)



    Sc0rp

    Re,


    seems to me that you mixed up your config on the bonded ethernets - one side with rr (round-robin), the other side with 802.3ad. Both sides have to match, so you should use mode 4 on your Linux box ...
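
    A quick way to check and switch the mode on the Linux side, as a sketch (the bond has to be taken down before the mode can be changed):

    cat /sys/class/net/bond0/bonding/mode             # e.g. "balance-rr 0" vs. "802.3ad 4"
    echo 802.3ad > /sys/class/net/bond0/bonding/mode  # set mode 4 / LACP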


    So the best conclusion, without seeing your config, is that your IP connection works over a single line, while the bond isn't working at all ...


    But: what does the bond look like on your switch? Is the LAG up or down? On your Linux box you can check it with:
    cat /proc/net/bonding/bond0 (please provide the output)


    Anyway, I only hope you have the right "environment" for bonding - under normal circumstances (private/home use case) it's not worth the time ...


    Sc0rp

    Re,


    please forgive @tkaiser - he's too much of a pro :D


    My understanding is that in a RAID 5 one disk can fail while the other two still operate normally. Is that wrong?

    That is not exactly what it does, or your understanding is a bit off: RAID-5 uses a striped data algorithm with parity to provide redundancy N=1 (one drive in the array can fail without losing data). So "the other two still operate normally" is not exactly what will happen if one disk dies ... you have a degraded array then, and that means degraded performance too - along with no more redundancy (N=0) ...
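
    A worked example of the parity idea for one stripe of a 3-disk RAID-5 (simplified - in reality the parity block rotates across the disks):

    # disk1 holds D1, disk2 holds D2, disk3 holds P
    #   P = D1 xor D2    - parity, written together with the data
    # if disk1 dies, every read of D1 becomes:
    #   D1 = P xor D2    - reconstructed on the fly, hence the degraded performance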


    Like I said: the problem is reproducible. I rebuilt the RAID, all three drives were working the way they should. I rebooted the system and suddenly /dev/sdb was missing. This is the log from the point since when it was missing...


    Like I said: there has never been any clue in the syslog that would point to the problem - it seems as if the system for some reason does not detect the /dev/sdb drive correctly...

    That seems to me like a sporadic issue, seen quite often lately: because of an unknown problem, sometimes the DCB (superblock in md terms) cannot be written to the disk (it is stored in the first 4KiB of the disk ...), and then after a reboot it holds nothing for the auto-detection ... adding the drive later on will then use a backup superblock.


    Try this:
    - make a backup of your data
    ... then:
    - if the drive is failing after the next reboot, zero its first sectors with dd (see the sketch below)
    - add the drive to the array again (as a completely new drive)
    - wait for the rebuild to finish
    - try a reboot
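
    A sketch of the zero-and-re-add step (device and array names are assumptions - double-check them before running dd!):

    dd if=/dev/zero of=/dev/sdb bs=4096 count=1024    # wipe the first 4 MiB, including the superblock area
    mdadm /dev/md0 --add /dev/sdb                     # re-add the drive as a completely new member
    cat /proc/mdstat                                  # watch the rebuild progress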


    Sc0rp