That's why I'm asking here whether something is blocking this on a vanilla, freshly installed OMV 7...
P.S. If you read my first post, it is even stranger with ipvlan L2... there the host should not be interfering.
Posts by tom_tav
-
Jellyfin is running fine. I guess it's a Docker host/container problem with the UDP multicast port for UPnP. The Apple TV can discover the Jellyfin server, but the iOS app can't. And as written, port 1900 is not open from the network side...
I already set the Jellyfin DLNA log level to debug but didn't see anything; I need to dive in further:
"Serilog": {
  "MinimumLevel": {
    "Default": "Warning",
    "Override": {
      "Microsoft": "Warning",
      "System": "Warning",
      "Jellyfin.Plugin.Dlna": "Debug"
    }
  },
-
Edit: this is on a freshly installed OMV 7.
I tried to get Jellyfin working with UPnP, without success:
With the standard setting it was expected not to work, because it needs multicast UDP:
ports:
- 8096:8096
- 8920:8920 #optional
- 7359:7359/udp #optional
- 1900:1900/udp #optional
---
But with host mode it still does not work:
network_mode: host
# ports:
# - 8096:8096
# - 8920:8920 #optional
# - 7359:7359/udp #optional
# - 1900:1900/udp #optional
% sudo nmap -sU -p 7359,1900 192.168.X.X (on another machine)
PORT STATE SERVICE
1900/udp closed upnp
7359/udp open|filtered swx
---
With ipvlan L2 it is even stranger. Sometimes port 1900 shows up, but most of the time it doesn't (via nmap). The Jellyfin client on iOS doesn't find the server via auto-discovery.
What am I missing?
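Not an answer, but maybe a way to narrow it down: in host mode the listeners should be visible on the host itself. A quick check, assuming iproute2 (`ss`, `ip`) is installed on the OMV box:

```shell
# list UDP sockets bound to 1900 (SSDP/DLNA) and 7359 (Jellyfin auto-discovery)
ss -ulpn | grep -E ':(1900|7359)' || echo "nothing bound on 1900/7359"
# SSDP only works if the host has joined the multicast group 239.255.255.250
ip maddr show | grep '239\.255\.255\.250' || echo "SSDP multicast group not joined"
```

If 1900 is not bound even in host mode, the DLNA plugin never opened the socket and the problem is on the Jellyfin side rather than in Docker networking.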
-
Thanks Raul!
-
Does nobody here know how the GUI works in this respect?
-
If I create a network with the Compose GUI, how can I add options?
Example: I need to create an ipvlan L2 network, so I need the ipvlan_mode option:
docker network create -d ipvlan --subnet NET --gateway GW -o parent=INT -o ipvlan_mode=l2 ip-vlan
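For reference, when the network is defined in a compose file instead, those `-o` options map to `driver_opts`. A sketch with the same NET/GW/INT placeholders (whether the compose plugin GUI exposes these fields is something I can't confirm):

```yaml
networks:
  ip-vlan:
    driver: ipvlan
    driver_opts:
      parent: INT          # physical interface, e.g. eth0
      ipvlan_mode: l2
    ipam:
      config:
        - subnet: NET
          gateway: GW
```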
-
THIS would be good to have in the documentation. I ran into the same hole: importing a ZFS pool and then not seeing it in File Systems.
In the ZFS pool screen, even though the pool was visible, I clicked the Discover button.
The pool then showed up as mounted in File Systems and the rest was easy: I added SMB and NFS shares and the appropriate user access.
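If the Discover step ever fails, the CLI equivalent after an import is simply mounting the datasets. A sketch, guarded so the lines are no-ops on machines without the ZFS tools:

```shell
# mount any datasets of already-imported pools that aren't mounted yet
command -v zfs >/dev/null && zfs mount -a || echo "zfs tools not installed"
# show what the File Systems page should now be seeing
command -v zfs >/dev/null && zfs list -o name,mountpoint,mounted || true
```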
-
I know that it's generally a bad idea to use symlinks in shares.
Is there a smarter solution to achieve this:
On my media server I have some directories I'd like to share with my media devices. These directories are scattered across the main structure:
e.g.
AUDIO/blahblah/iTunes/Files
AUDIO/gluglu/recordings/X
....
I can create a share for each of these dirs, but then I need to connect to X shares on each media device.
What I'd like to do is put them together in one share, e.g. MEDIA.
I could create a MEDIA dir, share it, and symlink to the needed directories. But AFAIK, following symlinks outside of the share can only be enabled globally, which I would like to avoid.
I tried to use sharedFolderFS, but it didn't work as expected (ZFS volume).
Any ideas?
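One alternative that avoids symlinks entirely: bind mounts. Samba follows them like ordinary directories, with no wide-links settings involved. A sketch for /etc/fstab (all paths are made-up stand-ins for the real pool paths):

```
# gather the scattered media dirs under one shared tree
/POOL/AUDIO/blahblah/iTunes/Files  /POOL/MEDIA/iTunes      none  bind  0 0
/POOL/AUDIO/gluglu/recordings/X    /POOL/MEDIA/Recordings  none  bind  0 0
```

`mount -a` applies them immediately; a single MEDIA share then covers everything. Caveat: OMV may not track fstab entries it didn't create itself, so treat this as a manual tweak.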
-
Based on the Samba 4.9.x features and the guide https://wiki.samba.org/index.p…Work_Better_with_Mac_OS_X I discovered a few small things:
1. The comment above "fruit:aapl = yes" in the smb.conf [global] section is not quite correct:
# Special configuration for Apple's Time Machine
Technically correct would be:
# enable Apple's SMB2+ extension codenamed AAPL
2. For the share sections, the "#Extra options" comment line is not added when generating the smb.conf.
3. It would be good to have the switch for EA support in the global section as well (right now it only exists under shares; it can be set via extra options, so it works for now).
And last but not least, the really important thing/bug:
4. Inside each share section there is an empty "vfs objects =" line. That's not very useful, because it stays in the configuration even when you add the needed "vfs objects = fruit streams_xattr" via the extra options, and you have to do that for every share because it overrides the global definition (if you made one).
P.S. Yes, I did see what gets added when you enable Time Machine support (it makes more sense to create a separate TM share with a quota). So at least when Time Machine is not enabled, the empty "vfs objects" line should simply be omitted from the section so the user can configure what he wants via extra options.
Sometimes you just want macOS-compatible shares without Time Machine. And when you do create a TM share, you usually want to add a quota.
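For illustration, roughly what a macOS-friendly [global] section could look like if the generator pulled those settings up from the shares. This is a sketch based on the linked wiki page, not the plugin's actual output:

```ini
[global]
    # enable Apple's SMB2+ extension codenamed AAPL
    fruit:aapl = yes
    # macOS-compatibility VFS modules; shares inherit this unless they
    # define their own (possibly empty!) vfs objects line
    vfs objects = fruit streams_xattr
    fruit:metadata = stream
    fruit:model = MacSamba
    ea support = yes
```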
-
Seems the speeds are OK for this machine. Could my 3+ day restore with ~40 MB/s average transfer rate be down to overloading the internal ports!?
SanDisk 64 GB SSD (SDSSDP064G) (on the 5th internal SATA port, 3 Gb/s link enabled w. modded BIOS)
Write zeros; this SSD is slow:
root@media:/INTERNAL# dd if=/dev/zero of=/testfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.0112 s, 89.4 MB/s
Read the testfile with zeros (cache at work):
root@media:/INTERNAL# dd if=/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.29201 s, 831 MB/s
Read a movie file; acceptable read performance:
root@media:/INTERNAL# dd if=/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.44178 s, 242 MB/s
ZFS RAID (RAID-Z1, no compression, 4x 3 TB ST33000651AS):
Write zeros:
root@media:/INTERNAL# dd if=/dev/zero of=/INTERNAL/testfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.8095 s, 122 MB/s
Read the testfile with zeros (cache at work):
root@media:/INTERNAL# dd if=/INTERNAL/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.89859 s, 566 MB/s
Read a movie file:
root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.05504 s, 177 MB/s
Here are the cross-copy operations; they are slower than the /dev/zero operations above:
RAID -> SSD, movie file:
root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.4298 s, 65.4 MB/s
SSD -> RAID, movie file:
root@media:/INTERNAL# dd if=/zzzz.mp4 of=/INTERNAL/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.3434 s, 104 MB/s
P.S. I can saturate the GbE connection!
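By the way, the "cache at work" read numbers can be made honest by dropping the page cache between the write and the read. A sketch (needs root; note this does not fully empty the ZFS ARC, so reads from the pool stay somewhat optimistic):

```shell
# write a 1 GiB test file and make sure it is flushed to disk
TESTDIR=${TESTDIR:-/tmp}
dd if=/dev/zero of="$TESTDIR/testfile" bs=1M count=1024 conv=fdatasync
sync
# drop the page cache so the next read has to hit the disks (root only)
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
dd if="$TESTDIR/testfile" of=/dev/null bs=1M
rm -f "$TESTDIR/testfile"
```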
-
I have openmediavault-diskstats installed, which doesn't show the ZFS RAID,
but I got the same data from rsync.
-
I only have the I/O stats from the source 8 TB disk, because disk performance is not available for the ZFS RAID:
The left part (until Fri 20:00) was the operation with rsync (mostly audio files for tracks and albums), from then on with cp (bigger video files). First I thought the HDD firmware was throttling because it ran hot (61 °C), but after cooling it down to 33 °C there was no difference.
-
Strange, no? I have to wait until the restore is finished and will try to benchmark the RAID then. At least it should be able to saturate the GbE connection, so ~100 MB/s is the target (which my OMV2 and Xigma boxes could do). Maybe the reason is that the source and the 4 target disks (pool) are on the same SATA controller... let's see.
-
geaves How is your ZFS performance on the little HP? Filling the ZFS array I get at most ~40 MB/s average (with cp; with rsync even less, around 30 MB/s), with short spikes above that (I'm restoring the data from a locally eSATA-connected 8 TB drive).
The backup from the old mdraid was much faster (at least twice as fast). I wonder if the SATA hardware is not strong enough on this machine.
I have no compression enabled on this pool because 98% of the files are media files.
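While the restore runs, it may be worth watching where the time goes per disk; `zpool iostat` shows per-vdev bandwidth live. A sketch (pool name is a placeholder, and the line is guarded so it is a no-op without ZFS):

```shell
# pool-wide and per-disk bandwidth: 3 samples, 5 seconds apart
command -v zpool >/dev/null && zpool iostat -v ZFS_POOL 5 3 \
  || echo "zpool not available on this machine"
```

If one of the four disks shows clearly lower write bandwidth than the others, a weak disk or port would drag the whole raidz1 down.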
-
The FreeBSD pool:
Pool version 5000 w. feature flags:
zpool get all ZFS_POOL | grep feature@
ZFS_POOL feature@async_destroy enabled local
ZFS_POOL feature@empty_bpobj active local
ZFS_POOL feature@lz4_compress active local
ZFS_POOL feature@multi_vdev_crash_dump enabled local
ZFS_POOL feature@spacemap_histogram active local
ZFS_POOL feature@enabled_txg active local
ZFS_POOL feature@hole_birth active local
ZFS_POOL feature@extensible_dataset enabled local
ZFS_POOL feature@embedded_data active local
ZFS_POOL feature@bookmarks enabled local
ZFS_POOL feature@filesystem_limits enabled local
ZFS_POOL feature@large_blocks enabled local
ZFS_POOL feature@sha512 enabled local
ZFS_POOL feature@skein enabled local
ZFS_POOL feature@device_removal disabled local
ZFS_POOL feature@obsolete_counts disabled local
ZFS_POOL feature@zpool_checkpoint disabled local
-
Thanks a lot. I think I will skip the compression; the little HPs are no performance monsters and are already getting bogged down by Plex.
Btw, in theory it should be possible to import an existing ZFS pool from a FreeBSD machine, no? (As long as the ZFS pool version is <= the max version supported on Debian.)
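As far as I know: yes, as long as every feature that is *active* on the pool is supported by the ZFS version on Debian; features that are merely *enabled* do not block an import. A cautious sketch, run on the OMV box (pool name as in the feature listing):

```shell
# read-only import first: verifies compatibility without touching the pool
zpool import -o readonly=on ZFS_POOL
zpool status ZFS_POOL
# if everything looks sane, switch to read-write
zpool export ZFS_POOL
zpool import ZFS_POOL
```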
-
Quote: "I shall leave this thread and hope that crashtest will continue to be of assistance, unlike some."
No, please! I really appreciate your input! And as you can see, our dear friend is ignoring my request...
-
mi-hol OK, let's leave the logging discussion out of this thread please; it's off-topic here.

-
Well, if you need accurate logs, you use a log server with remote logging anyway...
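For completeness, classic remote logging with rsyslog is a small drop-in on each client (the log host name is a placeholder):

```
# /etc/rsyslog.d/60-remote.conf
# forward everything to the central log server
# (single @ = UDP, double @@ = TCP)
*.* @loghost.local:514
```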
-
On my RAID (the source) I created new users/shares/... with the new OMV5 install, so the ACLs changed there.
I think I will use a thumb drive for boot and do the backup to my other NAS. If the thumb drive breaks, I just need to recreate it from this backup.
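The recreate-it part can be as simple as imaging the whole stick. A sketch (the device name and backup path are assumptions; always verify the device with lsblk first):

```shell
# assumed device name of the boot stick -- ALWAYS verify with lsblk!
DEV=/dev/sdX
if [ -b "$DEV" ]; then
    # image the whole stick to a file on the NAS
    dd if="$DEV" of=/backup/omv-boot.img bs=4M status=progress
    # restore later with the arguments swapped:
    #   dd if=/backup/omv-boot.img of=$DEV bs=4M status=progress
else
    echo "$DEV is not a block device -- check lsblk output"
fi
```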