Need some ZFS help, or maybe the help needs to be with Docker/Portainer; I just don't know.

  • As you can see from the last screenshot, I have 4 ZFS pools. I will explain them for context:


    TEST = a single-drive 2TB pool just for testing, because this drive was throwing errors and I think it was a cable problem. It will stay unused as a spare for the MEDIA pool in case of a future failure, so I will have a hot disk ready to go.


    MEDIA = a 6-drive (2TB each) RAIDZ1 pool for media library storage and a secondary spot for a backup of important data.


    BACKUPS = a 3-drive (2TB each) RAIDZ1 pool for initial backups of important data from my desktop via rsync jobs.


    SCRATCH = a single-drive simple/basic 256GB SSD pool for things like downloads and HandBrake conversions. This pool holds the Docker/Portainer containers and images, like SABZ, SONARR, RADARR and HANDBRAKE.


    I created each of these pools the exact same way. As you can see above, only the SCRATCH pool is acting a bit odd; maybe it is supposed to be this way with Docker/Portainer, but I just don't know. The amount of clutter sends my OCD into overdrive when I see all the snapshots/clones/filesystems in those listings.


    Now to the questions:
    1. Why are there so many entries (snapshots/clones/filesystems) in the ZFS listings?
    2. Why are these snapshots being made?
    3. Even though they don't take up much space, there are a lot of them, and I am wondering when I can delete them and whether I can cut down the frequency of their creation.
    4. Do I need all of these snapshots?
    5. Should I turn off the snapshots, and if so, how do I remove them or keep only 1 or 2 of them? I have tried to delete the snapshots from the Snapshots tab and the ZFS clones/filesystems from the ZFS tab, and both attempts show errors saying that something depends on them and they can't be deleted. (A sketch of how to inspect those dependencies from the command line follows below.)
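    To see what actually depends on what, a command-line look at the pool usually helps. This is only a sketch, assuming the pool is named SCRATCH as in the screenshots; the exact dataset names will differ on your system:

        zfs list -t all -r -o name,used,origin SCRATCH    # list filesystems, snapshots and clones with their origins
        zfs get -r origin SCRATCH                          # shows which snapshot each clone was created from

    A clone cannot be destroyed while anything still uses it, and a snapshot cannot be destroyed while a clone originates from it, which is where the "something depends on it" errors come from.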


    I'm not even sure how to look this up on Google, because I have no idea if it is Portainer/Docker doing this (which I assume it is). The only thing I can do in Portainer is change the frequency at which they are made; I don't see anywhere in Portainer to delete them or really change them. I'm just not sure if this is how it is supposed to look/be.


    Again, sorry if I posted this in the wrong place, but I am not really sure where this belongs. Please lend a hand, and have a great day.


    thanks,


    Jeff

  • In my opinion something is wrong with your SCRATCH pool. OMV has no feature to create snapshots or clones automatically. It is possible to create snapshots manually via the WebUI, but for automatic snapshots other tools must be used (e.g. znapzend).


    You should destroy the SCRATCH pool and give it a second try.


    Btw: Why don't you use /dev/disk/by-id/xxx for your disk assignment?
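    For reference, the stable names can be listed and compared with the kernel names; roughly something like this (the output will of course look different on your hardware):

        ls -l /dev/disk/by-id/      # stable, serial-number based names that point to sda, sdb, ...
        zpool status SCRATCH        # shows which device names the pool was created with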


    Edit: I don't know if the Docker images are responsible for this. How would they "know" that they are running on ZFS?
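    One way to check is to ask Docker which storage driver it picked. If the Docker data directory sits on a ZFS dataset, Docker can select its zfs storage driver automatically and will then create datasets, snapshots and clones for image layers and containers. A quick check, as a sketch:

        docker info | grep -i 'storage driver'    # prints e.g. "Storage Driver: zfs" if the zfs driver is in use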

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • What do you mean by this?


    Btw: Why don't you use /dev/disk/by-id/xxx for your disk assignment?


    I am kind of a novice at ZFS; I just created all the pools with OMV the same way. I know Portainer is creating these snapshots, or at least I think it is.


    I used ZFS just because I didn't like that EXT4 was doing some things in the mountpoints for the SCRATCH drive. Again, I'm not seeing any problems, I'm just not sure why this is there. If it helps: since I have all of my Docker configs like I want them now, no more snapshots have been made, so I think the snapshots were being created while I was making changes. I just want to get rid of the old ones, and Portainer shows no way of doing this. I'm trying to kill a snapshot and see if that trickles down.
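    If you do experiment with removing snapshots by hand, a dry run first is safer. A rough sketch, assuming a snapshot name taken from your own zfs list output (the name below is made up):

        zfs destroy -nv SCRATCH/some-dataset@some-snapshot       # -n = dry run, -v = show what would be destroyed
        zfs destroy -nv -R SCRATCH/some-dataset@some-snapshot    # -R also includes dependent clones in the dry run

    Be careful with -R: if Docker's image layers are clones of that snapshot, destroying them for real can break the containers that use them.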


    Thanks again for taking a look. And my god, redoing the SCRATCH pool is going to take some serious time, so I would love to find a way not to have to do this. The image shows the settings I used for all of the pools I made.

  • Btw: Why don't you use /dev/disk/by-id/xxx for your disk assignment?

    I think I was mistaken. Please don't worry about it.


    This post should lead you to a solution: ZFS settings - post #18. But I don't think you'll be happy about it. It is ZFS-related behavior of Docker.
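    If you do end up moving the Docker data to an ext4 disk, the usual way to point Docker at it and away from the zfs driver is /etc/docker/daemon.json. A sketch only; the data-root path is an example and must match wherever your ext4 filesystem is actually mounted:

        # /etc/docker/daemon.json
        {
            "storage-driver": "overlay2",
            "data-root": "/srv/dev-disk-by-label-scratch/docker"
        }

    After editing, the Docker service has to be restarted, and existing images and containers have to be pulled or recreated, because they live in the old location.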


    BTW: The whole thread is very interesting.


    BTW2: For better performance it is suggested to create the pool with ashift=12 if Advanced Format (4K sector) drives are used.
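    For what it's worth, ashift is fixed at pool creation time. A rough sketch of setting and checking it (pool and device names are examples only):

        zpool create -o ashift=12 MEDIA raidz1 /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3
        zdb -C MEDIA | grep ashift        # 12 means 4K sectors, 9 means 512-byte sectors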



  • You are correct, me no likey, but it makes sense. Looks like I am going to need to move the Dockers and my SCRATCH layout over to EXT4. My only question now is: how do I get my ext4 drive to mount at /SCRATCH rather than /sharedfolders/SCRATCH? Just looking to make it look the way I want.


    Thanks again for the link. I was so confused about it that I didn't even know how to ask the question intelligently enough to get any real response without just spelling it all out in the OP.


    Thanks again a ton.

  • My only question now is: how do I get my ext4 drive to mount at /SCRATCH rather than /sharedfolders/SCRATCH? Just looking to make it look the way I want.

    Quote @ryecoaaron: "The mount in /srv is the actual filesystem mount. The mount in /sharedfolder is a bind mount of just the sharedfolder's folder to the easy to remember path."
    Link to the whole thread: Error when deleting shared folder
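    In other words, you can add your own bind mount on top of that. A minimal sketch, assuming the ext4 filesystem is mounted by OMV under /srv (the exact /srv/dev-disk-by-... path has to be taken from your own system):

        mkdir /SCRATCH
        mount --bind /srv/dev-disk-by-label-SCRATCH /SCRATCH
        # or make it permanent with an fstab entry:
        # /srv/dev-disk-by-label-SCRATCH  /SCRATCH  none  bind  0  0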


  • So, last question on the main topic: should I be worried about the clones enough to move all my stuff, reformat the drive to ext4, and rebuild that setup? I mean, is this a real problem or just something messy?
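    One way to judge whether it is more than cosmetic is to check how much space the snapshots and clones actually hold; a rough sketch, again assuming the pool is named SCRATCH:

        zfs list -o space -r SCRATCH      # the USEDSNAP column shows space held only by snapshots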


    Thanks again
