Posts by blazini36

    For the target OMV user the device file and the mount point of the file system are somewhat irrelevant. OMV does not use labels anymore because they caused more trouble than they helped.


    What is your target user?


    What troubles did labels cause? I've been using OMV since 3 or 4 and never had an issue with labels. I've run OMV on an RPi4, an Odroid N2, an Odroid H2, and currently an R5 3600 with like 15 disks.......no issues with labels.

    Feel free to submit a pull request. I don't have time to port much to OMV 6.x let alone this huge change that very few would use.

    Lol there's this notion that because you know a thing or 2 about Linux you must be a programmer. I am not a programmer, so I won't be submitting any pull requests. You missed my point though: you pretty much stated that everything should be done through the interface and I shouldn't be worried about mountpoints. And that's fine, I understand that, but the point is that my use for snapraid is outside the scope of the interface. I'm not saying you need to fix the snapraid plugin, I'm explaining why I have to diverge from the interface.


    A one time symlink with the symlinks plugin can give you an even better path like /srv/diska/dirx. I do that myself. What wouldn't it solve?

    Well, I've never tried that. I actually try not to diverge from the interface, as you've suggested several times; it's hard to tell what's going to break OMV, so I haven't experimented with that. It's obvious you're a dev and I thank you for the good work, but one thing I know about software guys is they get a bit overprotective of their design choices. This thing is an annoyance, not a dealbreaker, and it doesn't really detract from OMV. I really just don't understand the reasoning for the design change itself; that's the whole point of the post. With labels there was nothing to work around.
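
    For anyone else following along, my understanding is that the CLI equivalent of what the symlinks plugin does is just a one-time link over the ugly path, something like this (the UUID and names here are made up):

    Code
    # point a friendly path at the OMV-managed mountpoint
    sudo ln -s /srv/dev-disk-by-uuid-1234abcd-5678-90ef-1234-567890abcdef /srv/diska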


    You must be talking about a different kind of noob than I am. The noobs I have been supporting for over a decade are afraid to use the command line.


    There are plenty of power users who use the command line. Can't have a plugin for everything either.

    The fact that OMV maintains mount points has never changed. Ubuntu doesn't have any automatic mountpoint creation, so the fact that Ubuntu doesn't overwrite changes is not surprising.

    If you use the command line as much as it sounds, why not use mc? Writing a filebrowser plugin would be a project in itself.

    Again, sort of missing the point. How many posts on here have someone with an issue and the response is that they should SSH in and do xyz? If someone is that scared of the CLI, they are probably using the wrong software; even Ubuntu can't keep people away from the terminal.


    I don't use the CLI that often for OMV, I'm just not scared to do so when I have to. I've used Linux for many years, which is the whole reason I built a NAS that runs Linux. The point about Ubuntu is that you brought it up because Ubuntu writes fstab with UUIDs for disks that exist during install. OK, but there's no management interface that is going to rewrite the change in fstab after you make it, so it's not a 1:1 comparison.


    I wasn't asking anyone to write a file browser plugin; again, that wasn't the point. The point was simply that there is no file browser plugin, so to get above a share I have to SSH in.........and that's completely fine. The point was that when that is the case, this new thing becomes an inconvenience. But yes, I do use mc from time to time, glad you brought that up.............



    See how easy it is to identify which disk I'm trying to access when it is "dev-disk-by-label-xyz"? That first weird one is my MergerFS mount.....no big deal, I know what it is. Now those "dev-disk-by-uuid-1234" mounts, I have no idea what those disks are. If all of these disks were by-uuid it would be really confusing. What's even more confusing is a mergerFS disk without path preservation: it has all of the same directories as every other MergerFS disk, so then ya have to run blkid to match the label to the UUID to figure out what's what.
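
    For reference, the matching-up I'm talking about is just this sort of thing (device names are placeholders):

    Code
    # show label, UUID and mountpoint side by side
    lsblk -o NAME,LABEL,UUID,MOUNTPOINT
    # or query a single partition
    sudo blkid /dev/sdb1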


    So OK, I concede that you can do stuff with more symlinks, but with labels you don't have to. So whyyyyyyyyyyyyy..........labels were so much betterrrrrrrrrrrrrrrrrr!

    Everything OMV creates should be between the omv tags and is maintained by OMV. What difference does it make if it is a mess?

    Because not every scenario is covered by OMV, which is how I came across the issue in the first place. I have to manually create and maintain a 2nd snapraid config because the plugin can't handle it. If I named a disk "DiskA", I know exactly what its mountpoint is when it's done by label: it is /srv/dev-disk-by-label-DiskA, and I can write that config rather easily. If I need to navigate that filesystem over SSH, I know where I'm going; it starts at /srv/dev-disk-by-label-DiskA/dirX. Now I have to keep note of the UUID of the disk just to navigate that........how is that "noob friendly"?
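
    To make that concrete, a hand-maintained snapraid config against label mountpoints is about this simple (disk names and paths here are just examples, not my real config):

    Code
    # second array, maintained by hand outside the plugin
    parity /srv/dev-disk-by-label-Parity2/snapraid.parity
    content /srv/dev-disk-by-label-DiskA/snapraid.content
    content /srv/dev-disk-by-label-DiskB/snapraid.content
    data dA /srv/dev-disk-by-label-DiskA
    data dB /srv/dev-disk-by-label-DiskB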


    Not sure why. It is very common and even the default for non-LVM partitions mounted by the Ubuntu installer. And since the goal of OMV is to do everything from the web interface, the mount point means little and the least problematic path should be used. It is much, much more likely that someone will put two drives in a system with the same label.

    You pretty much just summed up the reason they should not have done this right there. The goal is to do everything from the web interface, and that's fine, but the reality, if you just look at posts on this forum, is that that'll pretty much never be the case. At any point, someone using Ubuntu who learns about mounts and fstab can (and will) adjust the mountpoint to be whatever they want, and it will not be overwritten. That is not the case with OMV: I can't change the mountpoint, but if it is done by label, at least I gave the disk that label and it is familiar. The web UI has no real file browser, so if I share a disk I can only ever navigate down through the client's file browser. If I need to get above the share, I need to SSH in and navigate to the mountpoint. That gets inconvenient when I have 10 disks mounted by UUID.
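
    Purely to illustrate what I mean, these are made-up fstab lines, not what either OS actually writes verbatim:

    Code
    # plain Debian/Ubuntu: mount by label at a path I picked, and it stays put
    LABEL=DiskA  /mnt/diska  ext4  defaults,noatime  0 2
    # the by-uuid style I'm stuck looking at instead
    UUID=1234abcd-5678-90ef-1234-567890abcdef  /srv/dev-disk-by-uuid-1234abcd-5678-90ef-1234-567890abcdef  ext4  defaults,noatime  0 2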



    Why is a noob using the filesystem mount point? If docker is what you are referring to, they just need to look at the mount point in the Filesystems tab. There is nothing to really understand in this case.

    A large portion of the OMV user base are noobs.

    That's kind of a silly thing to say; you can't actually be a noob for very long. Besides, everyone here who started using OMV was a noob at some point by definition.......we all made it through, and I doubt many people complained about disks being mounted by label rather than UUID. This is the type of thing you would encounter if you bought a NAS box from some company and used their proprietary interface. No one here did that; they built it, and they need to do maintenance on it that they wouldn't likely be doing on a Synology.

    I had the same question a week or so ago. The answer I got was that users who have no idea about Linux plug in and pull out USB drives without bothering to mount and then unmount them. So, like with everything else, we have to bring everything down to the lowest common denominator. And to answer your last questions, I got a resounding NO.

    That doesn't seem like a very good reason, unless I'm missing something. That is a good reason why OMV doesn't mount based on /dev/sda etc., since those device names can change unless the mount is pinned by UUID.


    The only way improperly unmounting a removable drive would be handled worse with a label-based mountpoint than with a UUID-based one is if two drives had the same label, I suppose. It would have been nice to have this as a configurable option, because now my fstab is a mess.

    I've been running OMV for a while now and I've added a few disks through the web interface; nothing abnormal-seeming there. I have to set up a separate, second snapraid array because the original array is getting a bit complicated. Snapraid allows this, but the plugin interface in OMV does not, so I have to set it up manually.


    Anyway, I went to check the disk mountpoints and everything added recently was mounted to /srv/dev-disk-by-uuid-##### rather than by /dev/disk/by-label/XXX. I have 3 disks mounting like this now, and it's kind of annoying to deal with when I'm used to looking at disks by label. I created the expected "/srv/dev-disk-by-label-" mountpoint directories manually and edited fstab manually to mount the drives to those mountpoints. The OMV web interface wouldn't start after that, so I had to restore the backup fstab file; it probably had something to do with an OMV config file not matching.
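
    From what I've gathered since (and I haven't verified this end to end), OMV writes the fstab entries between its omv tags from its own config database, so hand edits either get clobbered or leave the database out of sync; the supported way to regenerate them is supposed to be something like:

    Code
    # regenerate the OMV-managed fstab entries from its config database (OMV 5.x)
    sudo omv-salt deploy run fstab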


    The symlinks for /dev/disk/by-label/XXX do exist for the new disks, so I don't understand why it's using the UUID symlinks to mount them..... any way I can fix these mount points?

    So I did exactly as I said above, and the sync ran with no errors at all. Then I completely removed the 4 recently added disks from the snapraid config and synced again; as expected, the sync didn't have much to do and it didn't complain about anything. It's kind of an odd bug that adding 4 partially full disks to the pool caused I/O errors on a disk that's been there the whole time, but it's good that it's back to normal.

    First, 15 disks is a LOT of disks. 15 disks must be getting into use scenarios that have not been thoroughly tested by the DEV.

    I wouldn't say 15 disks is a "lot". Between trying to keep free space on the disks and the different use case for each, the disks add up quickly.


    That said, you might be right about "scenarios that have not been thoroughly tested by the DEV", not in regards to the number of disks, but the way I added the last few and the fact that they're on different interfaces. My previous snapraid use and the syncs were just for the mergerFS pool of spinning SATA disks. I have 2 more SATA spinners, a SATA SSD and an NVMe that had no redundancy, so I added them to the same existing snapraid pool. There's nothing that says this can't be done, since you can add partially filled disks to snapraid, though maybe there are just too many differences here.


    Quote

    - Have you checked to see if this is related to a specific "port". I/O errors could be related to the SATA/SAS port itself. (Hardware can fail and SATA/SAS cables may need to be reseated - both ends. On a rare occasion a cable may go bad. SMART STAT 199, CRC errors, is an indication of hardware link and / or interface issues.)


    I mentioned above that I've swapped the disks in the bays. They're in hotswap cages and each has its own SATA port going to a SAS card. This isn't a problem for snapraid; it doesn't care about the port used. The disk that originally had the errors has no SMART fail/prefail attributes, and neither does its replacement.


    Quote

    - If you're convinced nothing is wrong with the disks, without disturbing data, you could wipe your SNAPRAID installation and start again.


    Well, there doesn't appear to be a simple way to actually do that in snapraid, other than maybe manually wiping the parity disks and the content and config files. This is pretty much what I was asking about here: how to do it. "snapraid sync -R" would seem to be it, but it wants all disks present, so I wouldn't be able to remove the added disks from the array at the same time. It seems messy, since all I've had are partial syncs of all 15 disks since then.


    According to the manual, you have to change the config file to point the disk you want to remove at an empty directory and remove its .content reference from the config. So I did that for all of the newly added disks and left the original pool intact, with the replacement disk for the one that was throwing errors just as it was. I'm now running "snapraid sync -E", so it'll be using the original parity for the disks that remain. If that goes without error, I'll get them out of the config entirely and see what I can do about setting up a separate parity for those disks.
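
    In case it helps anyone else, the config edit the manual describes amounts to something like this per removed disk (the "d12" name and the empty directory are placeholders; NasDisk1 is one of my newly added disks):

    Code
    # before: the newly added disk as the plugin had it
    #content /srv/dev-disk-by-label-NasDisk1/snapraid.content
    #data d12 /srv/dev-disk-by-label-NasDisk1
    # after: drop the content line and point the data entry at an empty directory
    data d12 /srv/snapraid-empty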

    Have you run fsck on the drive?

    On which drive?


    The original disk:

    Code
    $ sudo fsck /dev/sdd1
    fsck from util-linux 2.37.1
    e2fsck 1.46.2 (28-Feb-2021)
    PoolDisk7: clean, 5944/244195328 files, 848717657/976754385 blocks


    New disk

    Code
    sudo fsck /dev/sdd1
    fsck from util-linux 2.37.1
    e2fsck 1.46.2 (28-Feb-2021)
    PoolDisk7: clean, 4592/183144448 files, 713659279/1465130385 blocks

    I posted this in the snapraid forum but that doesn't seem to get much traffic.


    So I tried to run a sync and it just fails with I/O errors that are limited to a single disk. So I removed the disk, put it in a dock on my desktop, added a new disk to the server, named it with the same disk label, then copied all of the files from the removed disk to the server over NFS. I run MergerFS on this pool of disks with the rule to fill the most empty disk first, so it pretty much just copied everything from the removed disk to the new disk. I copied everything rather than running fix, since the disk is still completely accessible and it's been a while since I synced; I figured my safest bet was to just copy and resync. The files it is complaining about haven't been touched in years, but I think if it had the chance it would error on every file on the disk. I've tested the RAM and moved the disk into a different SATA bay....same thing.


    There is pretty much zero indication that there was anything wrong with the first disk. Running sync now with the new disk throws the same errors starting with the same files, so I moved the directory that was throwing the errors outside of the array, and it's still throwing errors on the next files on this disk. I had no I/O errors or anything while copying these files and there seems to be no actual problem with them; I think the parity for this disk is bad or something. I also did not copy the .content file to the new disk since I wasn't sure how that would be handled.


    This was the first attempt at a sync since I moved 3 more partially filled disks into the array, hence the warning recommending 3 parity levels for 15 disks. These disks were already in the server, just outside of the Snapraid array......they are quite a bit smaller than the 6TB parity disks, so I figured it wouldn't be an issue. What do I do to get sync running clean again?


    Code
    Syncing...
    Using 136 MiB of memory for 32 cached blocks.
    Error reading file '/srv/dev-disk-by-label-PoolDisk7/YYY' at offset 1085014016 for size 262144. Input/output error.
    Input/Output error in file '/srv/dev-disk-by-label-PoolDisk7/YYY' at position '4139'
    DANGER! Unexpected input/output read error in a data disk, it isn't possible to sync.
    Ensure that disk '/srv/dev-disk-by-label-PoolDisk7/' is sane and that file '' can be read.
    Stopping at block 51245
    Saving state to /srv/dev-disk-by-label-PoolDisk1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk3/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk4/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk5/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk6/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk7/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk8/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk9/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk10/snapraid.content...
    Saving state to /srv/dev-disk-by-label-NasDisk1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-NasDisk2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-SSD1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-NVME2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-PoolDisk11/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk1/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk2/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk3/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk4/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk5/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk6/snapraid.content...
    Verifying /srv/dev-disk-by-label-PoolDisk7/snapraid.content...
    Error reopening the temporary content file '/srv/dev-disk-by-label-PoolDisk7/snapraid.content.tmp'. No such file or directory.

    Think I figured it out. I edited fstab from the terminal in emergency mode to remove all entries for the bad disk. It still complained a bit, but on reboot it started into OMV with the GUI. I slapped a 4TB disk in place of the bad 3TB, formatted it and gave it the same label. Running snapraid fix (from the GUI) is slowly filling the fresh disk, so I assume it's putting all the data from the last sync back onto that disk.
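
    For reference, I believe the CLI equivalent of what the GUI kicked off is roughly this, filtered to the replaced disk (the disk name is whatever it's called in the snapraid config, and the log path is just an example):

    Code
    snapraid fix -d d7 -l /var/log/snapraid-fix.log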

    It looks like I have a disk that's failing. The disk is part of a MergerFS pool and it's shared over NFS. I also run snapraid on this pool and just synced recently. The problem is that without that disk coming up properly, OMV starts in emergency mode and the web GUI never starts. How do I get past that disk not allowing OMV to start?


    This is the first failed disk I've encountered with OMV, and I'm not sure right now how to rebuild the array once I get into OMV. I don't have any 3TB disks on hand to replace it with at the moment, so I'm thinking I'll either swap in two 2TB disks or just leave it out until I pick up a couple of 6TB disks. Any pointers on that? Right now I have one 6TB parity disk, as that's the largest data disk size in the array.

    Trying a few things out here to get my OMV5 NAS performance up to snuff. I really like/need mergerfs, but the overhead definitely bottlenecks the main share using it. Even on 10GbE the mergerfs/NFS share is stuck at around 50MB/s write with a "sync" mount and it's too bursty, but it's not really any faster with an "async" mount. After looking around a bit I decided to give FS-Cache (cachefilesd) a try, but I ran into an issue:


    I've mostly followed this https://blog.frehi.be/2019/01/03/fs-cache-for-nfs-clients/ but I get an error trying to start the service.

    Code
    ~# systemctl start cachefilesd
    Job for cachefilesd.service failed because the control process exited with error code.
    See "systemctl status cachefilesd.service" and "journalctl -xe" for details.
    root@openmediavault:~#



    I don't see anything worth looking at in journalctl. The tutorial mentions a Debian bug that is supposed to be fixed by editing /etc/cachefilesd.conf and commenting out this line:

    Code
    #secctx system_u:system_r:cachefiles_kernel_t:s0

    But it's already commented out in the Debian package.


    The server-side fstab mount for the cache SSD is:

    Code
    # fscache
    UUID=a86f82b3-b4e8-49df-9162-532f7bb108d4 /var/cache/fscache ext4 rw,acl,user_xattr 0 2


    I read elsewhere that user_xattr is important for this, but I'm not sure it matters for getting the service to start. Any ideas?
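
    For context, my understanding from that tutorial is that even once cachefilesd starts, nothing gets cached until the NFS client mounts with the fsc option; reusing my existing mount line, that would look roughly like:

    Code
    192.168.1.105:/MFS_Share_1 /nfs/MFS_Share_1 nfs4 rsize=131072,wsize=131072,fsc,_netdev,async,auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0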

    So I started upgrading things and ran into some issues, mostly related to the mini-ITX form factor. For example, I made the assumption that by now SATA port multipliers were well supported, but this still doesn't really seem to be the case. The native SATA ports on the mobo will support a single channel with a port multiplier (the 3 HDD enclosures each have a SATA 1x4 port multiplier or USB3 hub internally), so only a single box would be recognized with all of its drives on eSATA. So 1 box is currently on eSATA and the other 2 are on USB3. I've used them on USB3 for a while now, but running SATA drives through a SATA-USB3 converter seems like a step that incurs overhead, so I ordered a 4-port eSATA card that supposedly supports port multipliers on all 4 ports. Being that I have a 10GbE card in the only PCIe slot, I ordered an M.2 to PCIe gen3 x4 adapter and hopefully that'll work out.


    I upgraded the existing OMV4 install to OMV5 and ran into a couple of issues that were more likely on the Debian side, so I fresh-installed OMV5 on a thumb drive. It's pretty much all set up now, and here's what I notice:


    If I mount the 10GbE shares from the client with the "sync" option and transfer about 10GB over 2 files, I'm still stuck at around 40-50 MiB/s writes to the server. If I mount as "async" I get a huge spike to 1GiB/s that settles out at around 150 MiB/s. I haven't done any caching yet because I haven't figured any of that out.


    I'm not terribly concerned with read performance of spinning rust; if I need fast read/write I can get an SSD share going separately as above. What I really want to improve is writes to the server. Basically I want to get the client out of the transfer as quickly as I can, and that points to some sort of write caching. I was looking into the tmpfs thing and wondered if this is the answer. I'm making a bunch of assumptions that tmpfs is doing some write caching. I checked and tmpfs is mounted on the stock install with no specified size, so I'm not sure how that's allocated. I only installed 4GB of RAM at the moment, and "$ df" says that tmpfs is using 12% @ 396MB, so that sounds in line with using 12% of available RAM. I can throw more RAM in if OMV is going to use it, but the previous setup never used much of the 16GB I had installed either. I also have swap turned off using the flashmemory plugin. Now I'm thinking I can use some of the NVMe as a swap partition in case tmpfs runs out of memory. So I think my main question is: can adding more RAM or NVMe swap space be configured in a way that tmpfs utilizes it?
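
    For what it's worth, my understanding is that a tmpfs mount defaults to half of RAM unless given an explicit size, and that both the size and an NVMe swap area are just fstab entries, roughly like this (device name and sizes are guesses for illustration):

    Code
    # cap /tmp at an explicit size instead of the RAM/2 default
    tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0
    # swap partition carved out of the NVMe (after running mkswap on it)
    /dev/nvme0n1p3 none swap sw 0 0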


    A side question: since the mini-ITX form factor limits me to a single x16 PCIe slot and I'll be occupying the M.2 PCIe x4 slot with the adapter/eSATA card, I'm thinking about picking up one of these to get a 10GbE NIC and 2 NVMe slots in one card:

    https://www.synology.com/en-us/products/E10M20-T1

    It mentions using the NVMe as "cache", but I assume that is done on the OS side of the Synology boxes. I figure it actually just presents the NVMe drives and the 10GbE NIC to the PCIe bus separately, but all in one card. Anyone have any experience with this card?

    I'm currently using an Odroid H2 x86 SBC to run OMV with 12 HDDs in three 4-drive USB3 enclosures. The current OS drive is a 128GB NVMe. I run MergerFS and Snapraid.


    The change is to repurpose a mini-ITX motherboard and AM4 CPU. The same three 4-bay enclosures will use eSATA instead of USB3, since I'll have the free native SATA ports and I find eSATA to be a bit faster and more reliable, as long as it actually works. I'll be adding a 10GbE NIC and moving the OS to a USB3 flash drive, since the NVMe as a system drive is kind of a waste.


    Question #1: I plan to shrink the partitions on the drive and image them to the USB stick. That typically works in Linux, but if it doesn't, is there any way to transfer the entire OMV configuration to a new install?


    Question #2: I'll have a 128GB NVMe not really doing anything and I don't need it for any storage. I've never found a good answer on whether there's a simple way to set up some kind of cache drive. The OS and data drives are ext4, and I'm not changing anything with the data/parity drives (they'll remain ext4). I've heard of bcache and ZFS, but I'm not sure how those work and I don't know if any of that helps me with ext4 drives.


    Question #3: Is there any use for more than 4GB of RAM in a NAS setup? I have 16GB of RAM in the current setup, but those are SODIMMs. I have a 4GB DIMM laying around that I'll likely slap in for this. I don't think I've ever seen the current setup use more than 10% of its RAM. I suppose that could change with the 10GbE NIC or whatever cache ends up looking like, but barring that, is there any reason for more than 4GB of RAM? I didn't pay a whole lot of attention to what the memory usage was for a snapraid sync, but I don't think it was much.

    Yeah, I did not think it was actually related to snapraid, but it's a bit difficult to pin down. When I first set up the pool and tested it, I added 3 fresh 6TB drives to the server and set up 1 parity and 2 data+content as the MergerFS pool. With that 12TB pool I migrated data into it from the existing drives, and once they were copied over I added each drive to the pool. All of that data I moved within the server by SSHing in and using mc to move from the old disk to the pool (not over the network). I then tested the pool by transferring a few files over the network and it seemed fine, but that was a pool of 2 drives.


    Since then I added 7 more disks and synced snapraid. Now there seems to be some large (mergerFS?) overhead when using async transfers. KDE/Dolphin likes to choke when doing an async transfer, but since transfers used to be quick regardless, that's not something that really bothered me. Now that there seems to be some major overhead, my desktop gets a bit unstable while making the transfer.


    If sync is set in fstab it doesn't bother the desktop to make an NFS transfer at all, but now the speed is down to ~35MiB/s. I ran hdparm -t on a couple of the disks in the pool and the raw disk speeds are fine @ ~150-200MiB/s. I can transfer files to non-mergerfs/snapraid shares on different disks on the same server and they run at fine speeds too....~130MiB/s with async.


    So I guess I didn't realize MergerFS was going to introduce that much overhead. I don't know if there is a way to mitigate it at all; I haven't actually figured out a way to make OMV utilize any of the system's hardware other than the CPU for file transfers. There is 16GB of RAM that barely gets used and an NVMe with plenty of space outside of the system partitions. The CPU is a modest J4105, but that hasn't seemed to be an issue, and I've been running the 2 gigabit NICs bonded.

    I just rearranged my NFS server to go from a JBOD setup to a mergerFS pool with snapraid (1 parity disk, 9 data disks). I did set up MergerFS prior to snapraid and used it briefly, and I did not notice any performance issues with the mergerfs pool share vs my multiple individual disk shares; I was getting about the typical ~140MiB/s writes to the share.


    Shortly after finishing migrating all the existing data into the pool, I ran a snapraid sync -h through the web interface. It took a very long time, about 2-3 days total. For whatever reason, when I write to the share now I'm getting ~40-50MiB/s. Watching the activity light of the disk in the server, it seems to run in bursts rather than continuously as it used to.


    I'm a little confused because I thought Snapraid was pretty much passive unless it was running a sync or something, so I'm not sure if I have a messed-up setting. I'm using these mount options in fstab (as I always have):

    Code
    192.168.1.105:/MFS_Share_1 /nfs/MFS_Share_1 nfs4 rsize=131072,wsize=131072,_netdev,async,auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0


    I have a bit of an issue with the file manager becoming unresponsive during transfers due to the async option, but this has always been the case and writes were ~150MiB/s previously; I have another post about that. I'm trying to figure out if snapraid or whatever else might be causing these issues.



    This has been a minor annoyance for a bit, but since I've been moving larger files lately it's become a bigger deal. While writing files to my NFS share, the file manager (Dolphin on Manjaro KDE) becomes pretty unresponsive while the transfer is occurring. Looking at my fstab file, I noticed that I am using the "async" option; I can't recall why, but I had a feeling it had to do with transfer speed. I set the mount option back to sync and Dolphin no longer slows down at all, but transfers are down from ~140MiB/s to ~35MiB/s.


    I don't have a problem with asynchronous transfers; I've never really seen any file corruption and the server is battery-backed. It is, however, an issue for the transfers to slow down to 35MiB/s, and obviously Dolphin practically freezing during transfers is also a problem. Any ideas on what I can try to get around either of these issues?

    I've been running OMV on single-board computers for a while now. I first started with a RockPro64, then moved to an Odroid N2 (both ARM SBCs). I saw a decent improvement switching from the RockPro64 to the N2; however, I've started to build a lot of things using the Odroid H2, which has an Intel J4105 CPU with dual gigabit NICs, an M.2 slot for NVMe, and SODIMMs for RAM.


    Right now I have the 2 NICs bonded (rr balanced), the OS is running on a 256GB NVMe, and I have 8GB of RAM in dual channel. I know the bonded NICs don't improve single-transfer speeds, but they're already there and should help with multiple transfers. I also know the OS running on NVMe probably isn't improving anything over running the OS on eMMC, but NVMe is actually cheaper than eMMC these days. RAM, I don't know; I'm not sure what kind of use OMV makes of RAM, and I have 16GB (2x8) readily available as well if there were some improvement to be had there.


    My storage is 3 USB3 4xHDD enclosures, just set up as JBOD, no RAID or anything. The disks are all single-partition ext4 and shared over NFS. I'm not trying to build anything crazy; I just figure I have a few hardware options on hand and I'd like to get the most out of them. I could either partition the NVMe, or move the OS to eMMC (I have a 16GB chip on hand) and use the NVMe disk as some sort of cache, though from searching around I can't find anyone mentioning a beneficial way to use SSD cache. Any thoughts?


    The H2
    https://www.hardkernel.com/shop/odroid-h2/