Upgrade from old OMV (2.2.14) to recent OMV (5 or 6 after release)

  • I shall leave this thread and hope that crashtest will continue to be of assistance, unlike some.

    No, please don't. I really appreciate your input! And as you can see, our dear friend is ignoring my request...

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

  • tom_tav

    See this -> post (above) for the ZFS command lines I mentioned but didn't originally include.

    Thanks a lot. I think I will skip the compression; the little HPs are no performance monsters and are getting bogged down by Plex already.
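    If I ever change my mind, lz4 is generally cheap on CPU, so it rarely hurts even on small boxes. A minimal sketch, assuming the pool keeps the name ZFS_POOL:

    # Enable lz4 compression; only data written afterwards gets compressed.
    zfs set compression=lz4 ZFS_POOL

    # Later, check how much space it actually saves.
    zfs get compressratio ZFS_POOL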


    Btw., in theory it should be possible to import an existing ZFS pool from a FreeBSD machine, no? (As long as the ZFS pool version is <= the max version on Debian.)

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

    Edited once, last by tom_tav ()

    • Official post

    In theory it should be possible to import an existing ZFS pool from a FreeBSD machine, no? (As long as the ZFS pool version is <= the max version on Debian.)

    In theory, yes. The version depends on how "old" your pool is and on when FreeBSD -> adopted ZOL (ZFS on Linux). As you say: if the ZOL pool version is equal to or lower than what the current Debian version supports, it should import.
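    A quick way to check before moving the disks, as a rough sketch assuming the pool is still called ZFS_POOL:

    # On the FreeBSD box: show the pool version (5000 means "feature flags" rather than a legacy numbered version).
    zpool get version ZFS_POOL

    # On the Debian/OMV box, with the disks attached: scan for importable pools.
    # Without arguments this only lists them (and flags unsupported features); it does not import anything.
    zpool import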

  • The FreeBSD pool:


    Pool version 5000 with feature flags:

    zpool get all ZFS_POOL | grep feature@

    ZFS_POOL  feature@async_destroy          enabled   local
    ZFS_POOL  feature@empty_bpobj            active    local
    ZFS_POOL  feature@lz4_compress           active    local
    ZFS_POOL  feature@multi_vdev_crash_dump  enabled   local
    ZFS_POOL  feature@spacemap_histogram     active    local
    ZFS_POOL  feature@enabled_txg            active    local
    ZFS_POOL  feature@hole_birth             active    local
    ZFS_POOL  feature@extensible_dataset     enabled   local
    ZFS_POOL  feature@embedded_data          active    local
    ZFS_POOL  feature@bookmarks              enabled   local
    ZFS_POOL  feature@filesystem_limits      enabled   local
    ZFS_POOL  feature@large_blocks           enabled   local
    ZFS_POOL  feature@sha512                 enabled   local
    ZFS_POOL  feature@skein                  enabled   local
    ZFS_POOL  feature@device_removal         disabled  local
    ZFS_POOL  feature@obsolete_counts        disabled  local
    ZFS_POOL  feature@zpool_checkpoint       disabled  local
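    For a version-5000 pool the number itself no longer matters; what matters is that every feature shown as "active" above is known to the ZoL build on Debian. The active ones here (empty_bpobj, lz4_compress, spacemap_histogram, enabled_txg, hole_birth, embedded_data) have been in ZFS on Linux for a long time, so the import should go through. A cautious sketch, pool name assumed:

    # On the Debian/OMV box: list every feature flag this ZoL build understands.
    zpool upgrade -v

    # Import read-only first, so nothing on the disks is modified if something is off.
    zpool import -o readonly=on ZFS_POOL

    # If it looks good, re-import writable.
    zpool export ZFS_POOL
    zpool import ZFS_POOL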

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

    Edited once, last by tom_tav ()

  • geaves How is your ZFS performance on the little HP? Filling the ZFS array I get at most ~40 MB/s on average (with cp; with rsync even less, around 30 MB/s average), with short spikes above that (I am restoring the data from a locally connected 8TB eSATA drive).


    The backup from the old mdraid was much faster (at least twice as fast). I wonder if the SATA hardware is not strong enough on this machine.


    I have no compression enabled on this pool because 98% of the files are media files.
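    For comparing runs, rsync can report the overall rate itself (rsync 3.1 or newer; the source path below is just a placeholder for the eSATA disk):

    # One aggregate progress line with the current overall transfer rate.
    rsync -a --info=progress2 /mnt/backup8tb/ /INTERNAL/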

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

    • Official post

    40 MB/s?! I get anything up to 110 MB/s; sometimes it drops, but never below 90 MB/s, and I have compression enabled even though this is used predominantly for media.


    I have 2x Seagate IronWolf and 2x WD Red drives forming the array; these are connected to the HP's backplane.

  • Strange, no? I have to wait till the restore is finished and will try to benchmark the RAID then. At the very least it should be able to saturate the Gigabit Ethernet, so ~100 MB/s is the target (which my OMV2 and Xigma setups could do). Maybe the reason is that the source and the 4 target disks (pool) are on the same SATA controller ... let's see.
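    One way to test the shared-controller theory during the restore is to watch per-device utilisation; if the eSATA source or the pool members sit near 100% utilisation while throughput stays low, the bottleneck is the disks or the controller rather than ZFS. iostat comes from the sysstat package:

    # Extended per-device statistics, refreshed every 5 seconds while the copy runs.
    iostat -x 5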

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

  • I only have the I/O stats from the source 8TB disk, because disk performance stats are not available for the ZFS RAID:

    The left part (until Fri 20:00) was the operation with rsync (mostly audio files, tracks and albums); from then on it was cp (bigger video files).


    At first I thought the HDD firmware was throttling because it ran hot (61 °C), but after cooling it down to 33 °C there was no difference.
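    The drive temperature can also be read directly from SMART while the copy runs (smartmontools package; /dev/sdX stands for the 8TB source disk):

    # Print the SMART attribute table and pick out the temperature line.
    smartctl -A /dev/sdX | grep -i temperature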

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

  • I have openmediavault-diskstats installed, which doesn't show the ZFS RAID, but I got the same data from rsync.
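    Even if the diskstats plugin can't chart the pool, ZFS keeps its own counters; something like this shows live per-vdev bandwidth during the restore (assuming the new pool is named INTERNAL, like its mountpoint):

    # Live per-vdev read/write bandwidth, refreshed every 5 seconds.
    zpool iostat -v INTERNAL 5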

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

  • Seems the speeds are OK for this machine. My 3+ day restore with an average transfer rate of ~40 MB/s could be down to overloading the internal ports!?



    SanDisk 64GB SSD (SDSSDP064G), on the 5th internal SATA port, 3Gbit/s link enabled with a modded BIOS


    Write zeros; this SSD is slow:

    root@media:/INTERNAL# dd if=/dev/zero of=/testfile bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.0112 s, 89.4 MB/s


    Read testfile with zeros, cache at work:
    root@media:/INTERNAL# dd if=/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.29201 s, 831 MB/s


    Read movie file, acceptable read performance:
    root@media:/INTERNAL# dd if=/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.44178 s, 242 MB/s



    ZFS RAID (RAID-Z1, no compression, 4x 3TB ST33000651AS):


    Write zeros:
    root@media:/INTERNAL# dd if=/dev/zero of=/INTERNAL/testfile bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.8095 s, 122 MB/s


    Read testfile with zeros, cache at work:
    root@media:/INTERNAL# dd if=/INTERNAL/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.89859 s, 566 MB/s


    Read movie file:
    root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.05504 s, 177 MB/s



    Here are the cross-copy operations; they are slower than the /dev/zero operations above:


    RAID -> SSD, movie file:


    root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.4298 s, 65.4 MB/s



    SSD -> RAID, movie file:


    root@media:/INTERNAL# dd if=/zzzz.mp4 of=/INTERNAL/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.3434 s, 104 MB/s



    P.S. I can saturate the Gigabit connection!
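    One caveat on the dd read numbers above: they largely measure the Linux page cache and the ZFS ARC rather than the disks. A rough way to get closer to disk speed is to flush the page cache before re-reading (note this does not empty the ARC, so ZFS reads can still come partly from RAM):

    # Flush dirty data, then drop the Linux page cache.
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Re-run one of the read tests afterwards.
    dd if=/zzzz.mp4 of=/dev/null bs=1M count=1024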

    Tom


    ----


    HP N54L, 6GB, 5disc Raid5, SSD Boot with OMV 5
    HP N54L, 16GB, 4disc ZFS pool, SSD Boot with other NAS system

    Edited once, last by tom_tav ()

    • Official post

    My 3+ day restore with an average transfer rate of ~40 MB/s could be down to overloading the internal ports!?

    Possibly; it would be a logical explanation. TBH, when I did a clean install of 5 I just left rsync to copy the data back and it finished when it finished, but general copying is perfectly OK. I had more issues with my W10 network driver, until I installed an older version.
