A very good alternative to Plex is Emby.
Posts by Gutz-Pilz
-
-
Thanks for the explanation.
So the (updated) process would be:
- build a RAID5 with the 8TB HDDs on another machine (OMV)
- give it the same UUID with tune2fs
- transfer everything manually with rsync -avr
- turn off both machines
- unhook the 2TB HDDs and hook up the 8TB RAID to the actual NAS
- boot up, and everything's fine? I'll keep you posted.
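For reference, the UUID and transfer steps above could look roughly like this on the machine holding the new array (a sketch only; /dev/md0 and the mount points are placeholder names for your actual devices, and the UUID is the one blkid reports for the old array):

```shell
# Read the filesystem UUID of the old 2TB array so the new one can reuse it
blkid /dev/md127

# Give the new ext4 filesystem the same UUID, so OMV's existing
# fstab/shared-folder references still match after the swap
tune2fs -U 5a24e136-09b9-48e1-95db-b44d5db3e28a /dev/md0

# Copy the data; -a preserves permissions, owners, groups and timestamps
rsync -a --progress /srv/old-array/ /srv/new-array/
```

These commands need root and real block devices, so treat them as a template rather than something to paste blindly.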
-
I don't have enough SATA ports to run both RAIDs simultaneously.
So the process would be:
- build a RAID5 with the 8TB HDDs on another machine
- transfer everything manually with rsync
- unhook the 2TB HDDs and hook up the 8TB RAID to the actual NAS
- reconfigure all shares and stuff

Does the 8TB RAID5 work without any problems after transferring it into the NAS system? Will it be recognized "OOTB"?
What's the rsync flag to transfer all files with their permissions? -
Hi,
I ordered 3x 8TB drives to replace and increase my current capacity.
Currently I have 5x 2TB. Is it possible to replace my RAID5 of 5x 2TB with 3x 8TB?
Thanks.
-
Looks promising. The problem didn't come up again.
-
Okay, solved for the moment.
I stopped the Docker service and avahi-daemon was able to restart. Let's see if that is persistent; I'll keep you guys posted tomorrow.
Thanks for the help. Good night.
-
Thanks. I tried this already, but it doesn't work.
The last thing I did was updating with apt-get upgrade. -
systemctl status avahi-daemon.service
Code
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/lib/systemd/system/avahi-daemon.service; enabled)
   Active: failed (Result: exit-code) since Di 2016-10-04 23:05:06 CEST; 3min 0s ago
  Process: 30822 ExecStart=/usr/sbin/avahi-daemon -s (code=exited, status=255)
 Main PID: 30822 (code=exited, status=255)

Okt 04 23:05:06 NAS systemd[1]: avahi-daemon.service: main process exited, code=exited, status=255/n/a
Okt 04 23:05:06 NAS systemd[1]: Failed to start Avahi mDNS/DNS-SD Stack.
Okt 04 23:05:06 NAS systemd[1]: Unit avahi-daemon.service entered failed state.
-
Code
cat /var/log/syslog | grep avahi
Oct  4 23:05:06 NAS avahi-daemon[30822]: Found user 'avahi' (UID 105) and group 'avahi' (GID 112).
Oct  4 23:05:06 NAS avahi-daemon[30822]: Successfully dropped root privileges.
Oct  4 23:05:06 NAS avahi-daemon[30822]: chroot.c: fork() failed: Resource temporarily unavailable
Oct  4 23:05:06 NAS avahi-daemon[30822]: failed to start chroot() helper daemon.
Oct  4 23:05:06 NAS systemd[1]: avahi-daemon.service: main process exited, code=exited, status=255/n/a
Oct  4 23:05:06 NAS systemd[1]: Unit avahi-daemon.service entered failed state.
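The "fork() failed: Resource temporarily unavailable" line usually means a process/thread limit was hit when avahi tried to spawn its chroot helper (which would fit the later observation that stopping Docker fixed it). A quick diagnostic sketch to see how close the system is to its limits:

```shell
# How many processes/threads are currently running system-wide
ps -eLf | wc -l

# The per-user process limit in the current shell (ulimit -u)
ulimit -u

# The kernel-wide thread ceiling
cat /proc/sys/kernel/threads-max
```

If the first number is near either limit, some service (here, apparently Docker and its containers) is eating the process budget.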
-
Samba wasn't working, so I tried to deactivate Samba in the web GUI and then start it again.
But when I try to deactivate it, I get this:
Bash
Fehler #0: exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; systemctl start avahi-daemon 2>&1' with exit code '1': Job for avahi-daemon.service failed. See 'systemctl status avahi-daemon.service' and 'journalctl -xn' for details.' in /usr/share/php/openmediavault/system/process.inc:174
Stack trace:
#0 /usr/share/php/openmediavault/system/systemctl.inc(83): OMV\System\Process->execute(Array, 1)
#1 /usr/share/php/openmediavault/system/systemctl.inc(140): OMV\System\SystemCtl->exec('start', NULL, false)
#2 /usr/share/openmediavault/engined/module/zeroconf.inc(61): OMV\System\SystemCtl->start()
#3 /usr/share/openmediavault/engined/rpc/config.inc(189): OMVModuleZeroconf->startService()
#4 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(150): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(517): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusJ0...', '/tmp/bgoutputSY...')
#8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(151): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#9 /usr/share/openmediavault/engined/rpc/config.inc(208): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
#10 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
#11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#12 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
#13 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
#14 {main}
-
I think it is an important plugin.
I mean, it isn't super important; it's nice to have an overview of all running containers and images via the web GUI.
But it is super easy to create and watch containers via the CLI.
And I feel like it is more complicated to get a Docker container running via the plugin than via the CLI. -
When will it be ready for the latest OMV 3?
-
Emby server in a Docker container as backend - love it! No dealing with a Kodi server or MySQL. So great.
-
WTF! It was planned to also work as a Time Machine. I'm astonished about this... Is there some workaround to get a working AFP network? Maybe a hand-made package without an interface in OMV?
This looks promising:
https://hub.docker.com/r/odarriba/timemachine/ -
Okay, never mind.
I deleted sde and the RAID5 is now recovering. -
Code
Jun 11 23:24:12 NAS kernel: md: md127 stopped.
Jun 11 23:24:12 NAS kernel: md: bind<sdd>
Jun 11 23:24:12 NAS kernel: md: bind<sde>
Jun 11 23:24:12 NAS kernel: md: bind<sdc>
Jun 11 23:24:12 NAS kernel: md: bind<sdb>
Jun 11 23:24:12 NAS kernel: md: bind<sdf>
Jun 11 23:24:12 NAS kernel: md: kicking non-fresh sde from array!
Jun 11 23:24:12 NAS kernel: md: unbind<sde>
Jun 11 23:24:12 NAS kernel: md: export_rdev(sde)
This is what I found with journalctl.
-
Hi there. I have a strange problem:
my RAID5 is missing a drive [UU_UU]
Code
root@NAS:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdf[4] sdb[5] sdc[3] sdd[1]
      7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
unused devices: <none>
Code
root@NAS:~# blkid
/dev/sdb: UUID="7f07d04a-bb09-a600-353d-2ad9b554df48" UUID_SUB="91b7283b-d2a1-8914-811e-8e28071bec7e" LABEL="NAS:HD" TYPE="linux_raid_member"
/dev/sdc: UUID="7f07d04a-bb09-a600-353d-2ad9b554df48" UUID_SUB="30422e32-92f0-0aee-8563-af7cc8690942" LABEL="NAS:HD" TYPE="linux_raid_member"
/dev/sda1: UUID="5f4ba204-2be7-4405-85b1-839076f69e03" TYPE="ext4" PARTUUID="b8b58244-01"
/dev/sda5: UUID="3676c23c-7683-43a8-b777-4073d1c04cfb" TYPE="swap" PARTUUID="b8b58244-05"
/dev/sde: UUID="7f07d04a-bb09-a600-353d-2ad9b554df48" UUID_SUB="1e1bce59-2b80-7d83-e9ba-411917b52133" LABEL="NAS:HD" TYPE="linux_raid_member"
/dev/sdf: UUID="7f07d04a-bb09-a600-353d-2ad9b554df48" UUID_SUB="97015689-b8fc-a1a2-8614-ed14ded2d059" LABEL="NAS:HD" TYPE="linux_raid_member"
/dev/sdd: UUID="7f07d04a-bb09-a600-353d-2ad9b554df48" UUID_SUB="9934de2d-c090-6082-36f7-796ea11f2dce" LABEL="NAS:HD" TYPE="linux_raid_member"
/dev/sdg1: LABEL="Other" UUID="19ad4901-a651-4001-9306-425956e2c501" UUID_SUB="c408e687-bce8-4dbc-ae4e-a15faa29dd16" TYPE="btrfs" PARTUUID="024ad1b2-c91a-4f02-a044-adfaac0b2612"
/dev/md127: LABEL="NASHD" UUID="5a24e136-09b9-48e1-95db-b44d5db3e28a" TYPE="ext4"
/dev/loop0: UUID="68817de3-c333-4e89-baf6-e02d8c3356e6" TYPE="xfs"
/dev/mapper/docker-8:1-3932199-pool: UUID="68817de3-c333-4e89-baf6-e02d8c3356e6" TYPE="xfs"
/dev/mapper/docker-8:1-3932199-69f44b98593940050a9ae91a6084853aff756fc061bfd7cffef628f21b01dbe6: UUID="68817de3-c333-4e89-baf6-e02d8c3356e6" TYPE="xfs"
/dev/mapper/docker-8:1-3932199-8430e86a5209034e95d06e41def8c9d6f6ad0fa2dbbd7083db541975358f01de: UUID="68817de3-c333-4e89-baf6-e02d8c3356e6" TYPE="xfs"
/dev/mapper/docker-8:1-3932199-e31e11202393507b583e8dc80212c2774cc88a3602d6d7a57574b44b0aa22f16: UUID="68817de3-c333-4e89-baf6-e02d8c3356e6" TYPE="xfs"
Code
root@NAS:~# fdisk -l

Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb8b58244

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 239855615 239853568 114,4G 83 Linux
/dev/sda2       239857662 250068991  10211330   4,9G  5 Extended
/dev/sda5       239857664 250068991  10211328   4,9G 82 Linux swap / Solaris

Disk /dev/sde: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdf: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdg: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D5030E3E-0E41-4808-B7B1-1C37F502D619

Device     Start        End    Sectors Size Type
/dev/sdg1   2048 3907029134 3907027087 1,8T Linux filesystem

Disk /dev/md127: 7,3 TiB, 8001589084160 bytes, 15628103680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 2097152 bytes

Disk /dev/loop0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/docker-8:1-3932199-pool: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

Disk /dev/mapper/docker-8:1-3932199-69f44b98593940050a9ae91a6084853aff756fc061bfd7cffef628f21b01dbe6: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

Disk /dev/mapper/docker-8:1-3932199-8430e86a5209034e95d06e41def8c9d6f6ad0fa2dbbd7083db541975358f01de: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

Disk /dev/mapper/docker-8:1-3932199-e31e11202393507b583e8dc80212c2774cc88a3602d6d7a57574b44b0aa22f16: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
When I try, I get:
Code
mdadm: looking for devices for /dev/md127
mdadm: Found some drive for an array that is already active: /dev/md/HD
mdadm: giving up.
Can someone show me how to get the sde drive back into my RAID5 array?
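A member that was kicked as "non-fresh" can usually be put back with mdadm's --re-add, falling back to --add (which triggers a full rebuild). A sketch, assuming the array really is /dev/md127 and /dev/sde itself is healthy (check SMART first):

```shell
# Inspect the kicked disk's md superblock and event count first
mdadm --examine /dev/sde

# Try to re-add it; if the event count / write-intent bitmap allows,
# only the out-of-date blocks are resynced
mdadm --manage /dev/md127 --re-add /dev/sde

# If --re-add is refused, add it as a fresh member instead
# (this starts a full RAID5 rebuild):
# mdadm --manage /dev/md127 --add /dev/sde

# Watch the recovery progress
cat /proc/mdstat
```

These commands require root and operate on live array members, so double-check the device names against blkid/mdstat output before running them.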
-
Also very good to know.
Does anyone have experience with the 4.3 kernel under Debian in combination with OMV?
4.3 is apparently required to get the on-chip GPU working... -
Good point. I wouldn't need WiFi, but I'll look into the graphics card.
The onboard GPU would be enough for me; that would be the Intel HD 530. -
Hi Frank,
thanks for the tip. But I already have a case (BitFenix Prodigy, case-modded with an integrated 9" monitor).
The 5x 2TB hard drives are also already on hand (with a long-term plan to gradually upgrade to 6x 4TB). So I'd rather stick with a custom build, also considering the performance.
Best regards