How to create a partition with a custom block size?

    • How to create a partition with a custom block size?

      Hi,

      By default the block size for ext4 and XFS is 4K, which is fine for most people, but not for me.
      I need a block size of 64K, but when I create a partition and format it with a 64K XFS block size, there's an error when I try to use it in openmediavault.

      Source Code

      mkfs.xfs -f -L NAS -b size=64k /dev/sdb1


      [Blocked Image: https://i.imgur.com/ucMFXWY.png]

      [Blocked Image: https://i.imgur.com/uE0iAFe.png]


      Source Code

      OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-label/NAS' 2>&1' with exit code '32': mount: mount /dev/sdb1 on /srv/dev-disk-by-label-NAS failed: Function not implemented in /usr/share/php/openmediavault/system/process.inc:182
      Stack trace:
      #0 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(720): OMV\System\Process->execute()
      #1 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(921): OMV\System\Filesystem\Filesystem->mount()
      #2 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
      #5 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
      #6 {main}


      So, any idea how to accomplish a 64k block size in omv?
    • limone wrote:

      By default the block size for ext4 and XFS is 4K, which is fine for most people, but not for me.
      I need a block size of 64K, but when I create a partition and format it with a 64K XFS block size, there's an error when I try to use it in openmediavault.
      What were you using 64K block sizes on before? XFS can't use a block size larger than the page size. And since Debian's kernel uses a 4K page size...

      # getconf PAGE_SIZE
      4096
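A quick way to check the limit on your own box; the mkfs line is commented out and `/dev/sdX1` is a placeholder, so nothing here touches a disk:

```shell
# The kernel page size caps the block size XFS can mount
# (4096 bytes on stock Debian amd64):
getconf PAGE_SIZE

# mkfs.xfs happily *formats* with -b size=64k, but mounting then
# fails with "Function not implemented". For a filesystem this
# kernel can mount, stay at or below the page size, e.g.:
#   mkfs.xfs -f -L NAS -b size=4096 /dev/sdX1
```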
      omv 5.0.10 usul | 64 bit | 5.0 proxmox kernel | omvextrasorg 5.1.1
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • I didn't use XFS before, I'm just trying to find a way to have a filesystem with a 64K block size on omv. Reason:

      Source Code

      fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=20 --time_based --name seq_read --filename=/dev/sdb
      seq_read: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
      fio-2.16
      Starting 1 process
      Jobs: 1 (f=1): [R(1)] [100.0% done] [35856KB/0KB/0KB /s] [8964/0/0 iops] [eta 00m:00s]
      seq_read: (groupid=0, jobs=1): err= 0: pid=10701: Wed Sep 4 21:00:48 2019
      read : io=687860KB, bw=34391KB/s, iops=8597, runt= 20001msec
      slat (usec): min=8, max=2608, avg=16.41, stdev=13.83
      clat (usec): min=2, max=11113, avg=96.35, stdev=110.46
      lat (usec): min=68, max=11132, avg=112.77, stdev=112.14
      clat percentiles (usec):
      | 1.00th=[ 69], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 80],
      | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 89],
      | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 129],
      | 99.00th=[ 207], 99.50th=[ 286], 99.90th=[ 1256], 99.95th=[ 2224],
      | 99.99th=[ 4640]
      lat (usec) : 4=0.04%, 10=0.04%, 20=0.01%, 50=0.03%, 100=77.31%
      lat (usec) : 250=21.95%, 500=0.37%, 750=0.09%, 1000=0.04%
      lat (msec) : 2=0.08%, 4=0.04%, 10=0.02%, 20=0.01%
      cpu : usr=5.45%, sys=22.11%, ctx=171915, majf=0, minf=11
      IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
      submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      issued : total=r=171965/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
      latency : target=0, window=0, percentile=100.00%, depth=1
      Run status group 0 (all jobs):
      READ: io=687860KB, aggrb=34391KB/s, minb=34391KB/s, maxb=34391KB/s, mint=20001msec, maxt=20001msec
      Disk stats (read/write):
      sdb: ios=171107/0, merge=0/0, ticks=16828/0, in_queue=16384, util=82.01%



      Source Code

      fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=64K --numjobs=1 --iodepth=1 --runtime=20 --time_based --name seq_read --filename=/dev/sdb
      seq_read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
      fio-2.16
      Starting 1 process
      Jobs: 1 (f=1): [R(1)] [100.0% done] [436.7MB/0KB/0KB /s] [6976/0/0 iops] [eta 00m:00s]
      seq_read: (groupid=0, jobs=1): err= 0: pid=11186: Wed Sep 4 21:03:21 2019
      read : io=8535.6MB, bw=436999KB/s, iops=6828, runt= 20001msec
      slat (usec): min=11, max=6109, avg=19.24, stdev=27.53
      clat (usec): min=2, max=16857, avg=123.61, stdev=140.16
      lat (usec): min=81, max=16910, avg=142.84, stdev=143.66
      clat percentiles (usec):
      | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 105],
      | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 116],
      | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 137], 95.00th=[ 159],
      | 99.00th=[ 249], 99.50th=[ 370], 99.90th=[ 1688], 99.95th=[ 2768],
      | 99.99th=[ 6624]
      lat (usec) : 4=0.02%, 10=0.02%, 20=0.01%, 50=0.01%, 100=11.95%
      lat (usec) : 250=87.00%, 500=0.63%, 750=0.11%, 1000=0.07%
      lat (msec) : 2=0.09%, 4=0.05%, 10=0.03%, 20=0.01%
      cpu : usr=4.80%, sys=19.86%, ctx=136550, majf=0, minf=24
      IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
      submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      issued : total=r=136569/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
      latency : target=0, window=0, percentile=100.00%, depth=1
      Run status group 0 (all jobs):
      READ: io=8535.6MB, aggrb=436998KB/s, minb=436998KB/s, maxb=436998KB/s, mint=20001msec, maxt=20001msec
      Disk stats (read/write):
      sdb: ios=135884/0, merge=0/0, ticks=17114/0, in_queue=16716, util=83.75%
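The gap between the two runs is mostly arithmetic: at queue depth 1 the drive sustains a similar number of requests per second either way, so throughput scales with the request size. A rough sanity check against the numbers above:

```shell
# throughput ~= IOPS * request size, using the fio results above
echo $(( 8597 * 4 ))   # 4K run:  34388 KB/s vs. reported 34391 KB/s
echo $(( 6828 * 64 ))  # 64K run: 436992 KB/s vs. reported 436999 KB/s
```

Note these fio runs read the raw device with --bs, so they measure the controller's behavior per request size, not the filesystem's block size as such.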
    • I have an fio card as well. What OS are you testing on? Not sure what your card is rated for reads and writes but it is much slower than mine even with a 4k filesystem on it.
    • limone wrote:

      I don't know what you mean by fio card; fio is just a benchmark tool for Linux. I'm testing on the omv system itself.
      Fusion-IO cards have fio utilities as well. I thought that was what you were using.

      limone wrote:

      The disk is a WD Red 4TB, but in a USB enclosure, and the controller seems to have poor performance at block sizes under 64K.
      You are trying to optimize performance on a single USB disk?? Filesystem block size is not the right road to go down. If you need that much performance out of the system, use faster disks and/or RAID, and/or attach it to a SATA port.
    • I run Proxmox as a homeserver on a NUC, there's no SATA :(
      The disks should be fine I guess; it's probably a weak USB-SATA controller, which is kinda disappointing for a 50€ disk enclosure.

      I don't really need hardcore performance, but 25MB/s is really weak if you want to stream 4K content... I think even 1080p would be laggy, depending on the bitrate ofc.
    • limone wrote:

      I run Proxmox as a homeserver on a NUC, there's no SATA
      The disks should be fine I guess; it's probably a weak USB-SATA controller, which is kinda disappointing for a 50€ disk enclosure.

      I don't really need hardcore performance, but 25MB/s is really weak if you want to stream 4K content... I think even 1080p would be laggy, depending on the bitrate ofc.
      There is definitely something wrong with your setup (probably the USB controller, as you mention). Streaming 4K content shouldn't need a filesystem block size change. I do that with no special settings off a single WD Red that has no problem saturating gigabit over NFS.
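For perspective on the bitrate math (the 100 Mbit/s figure is an assumption, roughly the UHD Blu-ray ceiling): even a very high-bitrate 4K stream needs far less bandwidth than the sequential rates measured earlier in the thread:

```shell
# Convert a worst-case ~100 Mbit/s 4K stream to MB/s (8 bits per byte):
echo $(( 100 / 8 ))   # ~12 MB/s needed -- under even the 4K run's ~35 MB/s
```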