
[HOWTO] Install ZFS-Plugin & use ZFS on OMV
-
- OMV 1.0
- raulfg3
-
-
I don't believe so. I have yet to read any material that discusses data safety issues with files being in the top level dataset.
-
Hi,
what do you think about lz4 compression? Is it applicable to compress 1080p movies?
What about the performance while streaming?
Regards Hoppel
-
-
I've been running with lz4...always, for years. I think that unless your server is a potato, you should be fine performance wise. In terms of compression ratio...I doubt you'll see much if the only thing on your filesystem is compressed video.
-
Hey guys,
I need a little help with SMB home directories and ZFS snapshots. I want to use the Windows shadow_copy feature but can't get it to work.
I've added a few lines to the extra options in the SMB configuration (Screenshot 1) and added a line to the "30homes" config file of SMB (Screenshot 2), but shadow copies don't show up in Windows. Any advice? -
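For reference, ZFS snapshots are normally exposed to Windows through Samba's vfs_shadow_copy2 module. A minimal sketch of the share options, assuming your snapshots are named to match the format string (the naming scheme below is an assumption, adjust it to your own snapshot schedule):

```ini
[homes]
    # Expose ZFS snapshots (found under the hidden .zfs/snapshot
    # directory of each dataset) as Windows "Previous Versions"
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    # Snapshot names must match this pattern, e.g. one created with:
    #   zfs snapshot tank/home@2016-04-01-120000
    shadow:format = %Y-%m-%d-%H%M%S
    shadow:sort = desc
```

Snapshots whose names don't match shadow:format are simply not listed, which is a common reason the Previous Versions tab stays empty.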
By "volume" do you mean "dataset"? What do you mean by "filesystem" then? Is that a "dataset"? Confusing.
Also, why do you need to specify a size for a volume? This shouldn't be required in ZFS.
Volumes in ZFS are virtual block devices, like a loop device: you have to specify a size, the same as when you create a loop device.
Filesystems are just paths within the pool, but they have their own properties, can be snapshotted, and probably can't have files moved between them directly; someone might confirm this. For example, WinSCP should not let you move a file between two filesystems, the same as happens between two traditional Linux filesystems -
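The distinction shows up directly in the commands; the pool name "tank" and the dataset names below are placeholders:

```shell
# A filesystem is a mountable path in the pool; no size is needed
# (you can optionally cap it later with a quota property).
zfs create tank/documents
zfs set quota=100G tank/documents

# A volume (zvol) is a virtual block device, so -V <size> is mandatory;
# it appears as /dev/zvol/tank/blockdev rather than as a mounted path.
zfs create -V 10G tank/blockdev
```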
-
I've been running with lz4...always, for years. I think that unless your server is a potato, you should be fine performance wise. In terms of compression ratio...I doubt you'll see much if the only thing on your filesystem is compressed video.
I have an actual Xeon CPU.
What about streaming music or 1080p movies from your ZFS filesystem? Do you notice that the files have to be decompressed before playback?
-
I'm running Plex on that server and have yet to notice any issues, the CPU is barely working when it's streaming. I believe folks are starting to enable it by default.
-
Ok, perfect. Thanks for the information. OMV isn't doing it by default, so I have to configure it manually on the command line.
I read about transparent compression and the great performance of lz4, but information on the web about compression in combination with streaming is rare.
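For the record, enabling it on the command line is a single property change; the dataset name here is just an example:

```shell
# Turn on lz4 for an existing dataset; only data written after this
# point is compressed, existing blocks are left untouched.
zfs set compression=lz4 mediatank/videos
```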
-
-
To be fair, my compression ratio is currently sitting at "1.0" so the compression hasn't benefitted me that much at the moment :-/
-
Ok, how can I adjust the ratio?
-
You can't. ZFS tries to compress everything; video and music really aren't compressible any more than they already are. The resulting size difference determines what ratio you achieve.
-
-
The compression ratio is a read-only property that tells you how compressed your data is. It just means it hasn't compressed much of mine.
-
what do you think about lz4 compression? Is it applicable to compress 1080p movies?
You want to save some space? Get yourself a decent HEVC-capable player and re-encode your 1080p files from H.264 to H.265.
Greetings
David -
Ok, so I don't need to activate the compression; 99% of my data is music and videos.
Off-topic: which H.264/H.265 encoder can you recommend?
Thanks for all your answers. This really helps in setting up my first zfs.
-
-
Which h.264 encoder can you recommend?
x264
Which h.265 encoder can you recommend?
x265 - http://x265.org/ - You can use HandBrake to convert the files.
Greetings
David -
x265 - x265.org/ - You can use handbrake to convert files.
Ok, thanks, I will check it. Never used HandBrake before, but I've heard about it.
For your information, maybe this is the wrong place to report bugs for beta software. While playing around with creating pools, I noticed that creating pools "by-id" doesn't work with the OMV web UI. I had to create the pool on the command line to get it to work. I am on OMV 3.0.13, omvextrasorg 3.0.11 and use the stable "zfs" repository. Yeah, I know it's beta, I only want to inform you. Maybe it works as expected with the "zfs testing" repository.
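On the command line that looks like the following sketch; the pool layout and the device ids are placeholders, your drives will show different serials under /dev/disk/by-id:

```shell
# Create a mirrored pool from stable by-id device paths so the pool
# survives /dev/sdX reordering across reboots.
zpool create mediatank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_2
```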
Greetings Hoppel
-
Hey guys,
made some performance checks with lz4. Here are the results:
I have two empty filesystems: one (/mediatank/videosc) with lz4 compression enabled and one (/mediatank/videos) without compression.
root@mediatank:/# zfs get compression mediatank/videosc
NAME               PROPERTY     VALUE  SOURCE
mediatank/videosc  compression  lz4    local
root@mediatank:/# zfs get compression mediatank/videos
NAME              PROPERTY     VALUE  SOURCE
mediatank/videos  compression  off    default
First I built data with dd in the compressed filesystem:
root@mediatank:/# dd if=/dev/zero of=disk1.img bs=1024000k count=10
10+0 records in
10+0 records out
10485760000 bytes (10 GB) copied, 3.43668 s, 3.1 GB/s
Now I do the same in the uncompressed filesystem:
root@mediatank:/# dd if=/dev/zero of=disk1.img bs=1024000k count=10
10+0 records in
10+0 records out
10485760000 bytes (10 GB) copied, 8.06394 s, 1.3 GB/s
Generating the data in the compressed filesystem is roughly 5 seconds faster.
Now I want to have a look at the size:
root@mediatank:/# df -h /mediatank/videosc/
Filesystem         Size  Used  Avail  Use%  Mounted on
mediatank/videosc  20T   256K  20T    1%    /mediatank/videosc
root@mediatank:/# df -h /mediatank/videos/
Filesystem        Size  Used  Avail  Use%  Mounted on
mediatank/videos  20T   9.8G  20T    1%    /mediatank/videos
root@mediatank:/# df -k /mediatank/videosc/
Filesystem         1K-blocks    Used      Available    Use%  Mounted on
mediatank/videosc  21435487232  256       21435486976  1%    /mediatank/videosc
root@mediatank:/# df -k /mediatank/videos/
Filesystem        1K-blocks    Used      Available    Use%  Mounted on
mediatank/videos  21445735552  10248576  21435486976  1%    /mediatank/videos
The compression for this kind of data is really impressive. The size is nearly zero.
Now, I want to copy the same 1080p-movie (h.264) to both filesystems to compare the results:
root@mediatank:/# df -h /mediatank/videosc/
Filesystem         Size  Used  Avail  Use%  Mounted on
mediatank/videosc  20T   30G   20T    1%    /mediatank/videosc
root@mediatank:/# df -h /mediatank/videos/
Filesystem        Size  Used  Avail  Use%  Mounted on
mediatank/videos  20T   31G   20T    1%    /mediatank/videos
root@mediatank:/# df -k /mediatank/videosc/
Filesystem         1K-blocks    Used      Available    Use%  Mounted on
mediatank/videosc  21414228736  30552704  21383676032  1%    /mediatank/videosc
root@mediatank:/# df -k /mediatank/videos/
Filesystem        1K-blocks    Used      Available    Use%  Mounted on
mediatank/videos  21415182976  31506944  21383676032  1%    /mediatank/videos
So the compression decreases the used size by nearly 1 GB: 31506944 - 30552704 = 954240 (1K blocks, about 0.9 GiB). That isn't half bad.
A look at "top -u htpc" (htpc is my HTPC user) shows a CPU usage of around 1% in both cases, whether streaming the compressed or the uncompressed video over a Samba share. So for the CPU it is not a problem.
My compression ratio likewise shows only a small saving of about 2%:
root@mediatank:/# zfs get all mediatank/videosc | grep compressratio
mediatank/videosc  compressratio  1.02x  -
So, at the moment, for me it is an option to compress all my data with lz4.
Can you recommend some other checks I can do, before copying my whole library to the zfs filesystem?
Thanks and regards Hoppel
-
-
Building the data in the compressed filesystem is round about 5sec faster.
Try with bigger, real data, not just a bunch of zeroes.
The compression for this kind of data is really impressive. The size is nearly zero.
Yeah, you just wrote a bunch of zeroes. Text and numbers can achieve a really high amount of compression, unlike media files.
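A quick way to produce more realistic test input is /dev/urandom, whose output is effectively incompressible (the file paths below are placeholders):

```shell
# Random bytes defeat lz4, so their on-disk size stays close to the
# logical size; all-zero data compresses to almost nothing on ZFS.
dd if=/dev/urandom of=/tmp/random.img bs=1M count=64
dd if=/dev/zero of=/tmp/zero.img bs=1M count=64
ls -l /tmp/random.img /tmp/zero.img
```

Written to a compressed dataset, comparing du against ls -l for the two files shows the difference immediately.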
Greetings
David -
Can you tell me a method to measure the throughput while copying files? With "cp" it seemingly is not possible.
Regards
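One simple option, not ZFS-specific, is GNU dd's status=progress flag (coreutils 8.24 or newer); the file paths here are placeholders:

```shell
# Create a small source file, then copy it while dd prints live
# throughput figures; larger bs values mean fewer syscalls.
dd if=/dev/zero of=/tmp/source.bin bs=1M count=32
dd if=/tmp/source.bin of=/tmp/target.bin bs=1M status=progress
```

Alternatives, if installed, are pv (pipe the file through pv into the target) or rsync --progress.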