Posts by Gutz-Pilz

    Thanks for the explanation.


    So the (updated) process would be:
    - build the RAID5 with the 8TB HDDs on another machine (OMV)
    - give the new filesystem the same UUID with tune2fs (see the sketch below)
    - transfer everything manually with rsync -avr
    - turn off both machines :)
    - unhook the 2TB HDDs and hook up the 8TB RAID to the actual NAS
    - boot up - and everything's fine?
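    I'm assuming the UUID and copy steps would look roughly like this (just a sketch - /dev/md0, /dev/md1, the paths and the hostname are placeholders, not my real setup, and it assumes ext4 on the new RAID):

    Code
    blkid /dev/md0                            # read the UUID of the old filesystem
    tune2fs -U <uuid-from-blkid> /dev/md1     # give the new ext4 filesystem the same UUID
    rsync -aAXv /srv/old/ root@newbox:/srv/new/   # -a keeps permissions, owners and timestamps; -A/-X add ACLs and xattrs ("newbox" = the other machine)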


    I'll keep you posted :)

    Don't have enough SATA ports to run both RAIDs simultaneously.


    So the process would be:
    - build the RAID5 with the 8TB HDDs on another machine
    - transfer everything manually with rsync
    - unhook the 2TB HDDs and hook up the 8TB RAID to the actual NAS
    - reconfigure all the shares and stuff


    Does the 8TB RAID5 work without any problems after transferring it into the NAS system? Will it be recognized "OOTB"?
    What's the rsync flag to transfer all files with their permissions?
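    I'm assuming the check after the move would look something like this (just a sketch; /dev/md127 and the mount point are placeholders):

    Code
    cat /proc/mdstat        # the array should be auto-assembled from the superblocks on the disks
    blkid /dev/md127        # filesystem type and UUID should still be intact
    mount /dev/md127 /mnt   # placeholder mount point, just to check the data

    And for the permissions question: as far as I know rsync -a (archive mode) already keeps permissions, owners and timestamps; -A and -X would add ACLs and extended attributes on top.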

    Okay, solved for the moment:
    I stopped the Docker service, and avahi-daemon was able to restart.
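    For reference, that boils down to something like this (assuming the stock systemd unit names):

    Code
    systemctl stop docker.service             # free the resources Docker was holding
    systemctl restart avahi-daemon.service    # starts again without the fork() error
    systemctl status avahi-daemon.service     # should now report "active (running)"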


    Let's see if that is persistent - I'll keep you guys posted tomorrow.
    Thanks for the help.


    Good night!

    systemctl status avahi-daemon.service

    Code
    ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
    Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled)
    Active: failed (Result: exit-code) since Di 2016-10-04 23:05:06 CEST; 3min 0s ago
    Process: 30822 ExecStart=/usr/sbin/avahi-daemon -s (code=exited, status=255)
    Main PID: 30822 (code=exited, status=255)

    Okt 04 23:05:06 NAS systemd[1]: avahi-daemon.service: main process exited, code=exited, status=255/n/a
    Okt 04 23:05:06 NAS systemd[1]: Failed to start Avahi mDNS/DNS-SD Stack.
    Okt 04 23:05:06 NAS systemd[1]: Unit avahi-daemon.service entered failed state.
    Code
    cat /var/log/syslog | grep avahi
    Oct  4 23:05:06 NAS avahi-daemon[30822]: Found user 'avahi' (UID 105) and group 'avahi' (GID 112).
    Oct  4 23:05:06 NAS avahi-daemon[30822]: Successfully dropped root privileges.
    Oct  4 23:05:06 NAS avahi-daemon[30822]: chroot.c: fork() failed: Resource temporarily unavailable
    Oct  4 23:05:06 NAS avahi-daemon[30822]: failed to start chroot() helper daemon.
    Oct  4 23:05:06 NAS systemd[1]: avahi-daemon.service: main process exited, code=exited, status=255/n/a
    Oct  4 23:05:06 NAS systemd[1]: Unit avahi-daemon.service entered failed state.
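    If I read that right, the interesting line is the fork() failure ("Resource temporarily unavailable"), which sounds like a process/thread limit being hit. This is what I'd check next (just a sketch):

    Code
    ps -eLf | wc -l                    # rough count of threads currently running
    cat /proc/sys/kernel/threads-max   # system-wide thread limit
    ulimit -u                          # per-user process limit in the current shell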

    Samba wasn't working, so I tried to deactivate Samba in the WebGUI and then start it again.
    But when I try to deactivate it, I get this:


    I think it is an important plugin.

    I mean, it isn't super important. It's nice to have an overview of all running containers and images in the WebGUI,
    but it is super easy to create and watch containers via the CLI.
    And I feel like it is more complicated to get a Docker container running through the plugin than through the CLI.
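    What I mean by "via the CLI" is just the standard Docker commands (the container name and image here are only examples):

    Code
    docker ps -a                                 # overview of running and stopped containers
    docker images                                # overview of local images
    docker logs -f mycontainer                   # follow the output of a container (example name)
    docker run -d --name web -p 8080:80 nginx    # create and start a container (example image)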

    Code
    Jun 11 23:24:12 NAS kernel: md: md127 stopped.
    Jun 11 23:24:12 NAS kernel: md: bind<sdd>
    Jun 11 23:24:12 NAS kernel: md: bind<sde>
    Jun 11 23:24:12 NAS kernel: md: bind<sdc>
    Jun 11 23:24:12 NAS kernel: md: bind<sdb>
    Jun 11 23:24:12 NAS kernel: md: bind<sdf>
    Jun 11 23:24:12 NAS kernel: md: kicking non-fresh sde from array!
    Jun 11 23:24:12 NAS kernel: md: unbind<sde>
    Jun 11 23:24:12 NAS kernel: md: export_rdev(sde)


    This is what I found with journalctl.
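    To see why sde is considered "non-fresh", I guess the next step is to compare the event counters of the members (sketch; sdb-sdf are the member disks from the log above):

    Code
    mdadm --examine /dev/sd[bcdef] | grep -E '/dev/sd|Events'   # the kicked disk should be behind the others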

    Hi there, I have a strange problem:
    my RAID5 is missing a drive ([UU_UU] in /proc/mdstat).

    When I try

    Code
    root@NAS:~# mdadm --assemble --force --verbose /dev/md127 /dev/sde


    Code
    mdadm: looking for devices for /dev/md127
    mdadm: Found some drive for an array that is already active: /dev/md/HD
    mdadm: giving up.


    Can someone show me how to get the sde drive back into my RAID5 array?
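    From what I've read so far I'd guess it has to be re-added to the already running array instead of re-assembled, roughly like this (sketch, using /dev/md127 and /dev/sde from above) - but please correct me if that's wrong:

    Code
    mdadm --detail /dev/md127                     # current state of the running array
    cat /proc/mdstat
    mdadm --manage /dev/md127 --re-add /dev/sde   # try to re-add the kicked member
    mdadm --manage /dev/md127 --add /dev/sde      # if --re-add is refused, a plain --add triggers a full resync
    watch cat /proc/mdstat                        # watch the rebuild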

    Hi Frank,


    Thanks for the tip, but I already have a case (a Bitfenix Prodigy, case-modded with an integrated 9" monitor :) ).
    The 5x 2TB hard drives are also already there (the long-term plan is to gradually upgrade to 6x 4TB).


    So I'd rather stick with the custom build, also considering the performance.


    Best regards