Posts by k567890

    Hi, I'm connecting over SSH and shutting down OMV with the "shutdown -h now" command.


    Basically, when the UPS needs to shut the system down, this is what gets run.


    Do you think this is a safe shutdown as far as the data on my OMV is concerned, or should I call another command? Perhaps you have your own custom command I should be calling to safely shut down the system.

    OK, this is probably not what you want to hear, but I say don't bother. You can no longer easily run your own mail server. Google and the other email providers will block your emails by default because your mail server's address is not on a whitelist with the email mafia. Basically you have to register your mail server and keep monitoring that registration... a lot of work... consider whether this is worth it for you.

    In OMV, when you select a zpool and click add object... you get options:


    Filesystem
    Volume
    Snapshot


    What the heck do you mean by Filesystem and Volume here... the terminology doesn't make sense.


    By Volume do you mean "dataset"? What do you mean by Filesystem then? Is that also a "dataset"? Confusing.
    Also, why do you need to specify a size for a volume? That shouldn't be required in ZFS.

    There is something unclear here.


    Let's get to it.



    NORMALLY YOU USE ZFS LIKE THIS
    create pool:
    # zpool create -o ashift=12 myfirstpool mirror ata-sldkfjlsjfdsljfslkjf ata-oiurorisiofsjodifjisojfio


    Creating DATASET in a zpool:
    # zfs create myfirstpool/myfirstDS


    List pool:
    # zpool list


    we see myfirstpool


    List datasets:
    # zfs list


    we see myfirstpool/myfirstDS



    Get all zfs attributes for a specific dataset:
    # zfs get all myfirstpool/myfirstDS


    IF I FOLLOW YOUR WEB GUI INSTRUCTIONS AND CHECK WHAT OMV MADE


    List pool:
    # zpool list


    we see myfirstpool


    List datasets:
    # zfs list


    we see myfirstpool (This is a clue that no proper dataset was made... it's just the default root dataset ZFS created with the same name as the zpool. ZFS always makes this at the time of zpool creation, but I don't think it is supposed to be used for data storage)


    Get all zfs attributes for a specific dataset:
    # zfs get all myfirstpool/myfirstpool


    we see error "no such dataset"


    POINT


    It seems to me that you created a zpool and did not create a dataset in it, as is proper practice. Instead, the plugin uses the default dataset created at the time of zpool creation? I have concerns about the validity of this... in particular about data safety.


    Would be nice to have some elaboration on this.


    What you have to realize is that zfs requires datasets to be made in zpools.

    I have one UPS and its USB is connected to another system. I need to issue remote shutdown commands to multiple systems from the system with the USB. One of the systems that needs to be shut down remotely is OMV.


    So my question is, what command do you recommend to shut down OMV --GRACEFULLY-- ?
    That is to say, I don't want the drives to lose any data.


    Setup:
    OMV->Luks plugin->ZFS plugin.
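
    For context, here is roughly what the UPS box runs when the battery gets low (the hostnames and key path are just placeholders for my setup):

```shell
# Shut down every box that shares this UPS.
# Key-based auth, so the UPS event can run unattended.
for host in omv.local box2.local box3.local; do
    ssh -i /root/.ssh/ups_key "root@$host" 'shutdown -h now'
done
```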

    In your case, I would just use an external hard drive... and plug-n-play... until you can save up a little. For the cost of about 5 pizzas you can buy a used computer to turn into a dedicated NAS. I think you'll be much better off.

    This may seem like a stupid question but here goes...


    what the heck is the point of streaming plugins like DAAP? I just don't get it.


    With a regular SMB share, I can watch and listen to anything across the network just fine. Why do I need something like DAAP?

    Right... I know... but still, if they are used in a topology, there is a dependence between these plugins...


    I realize you are using Javascript and then probably PHP to issue system commands...
    The question is what commands you issue and in what order. Of course, I'm limiting the scope to just how you handled the shutdown sequence...


    Does the LUKS plug-in simply ignore that the system will be shut down and never issue close commands? Or does it just issue close commands for open (mapped) LUKS-encrypted devices, oblivious to the fact that a ZFS pool hasn't been unmounted yet?


    If the LUKS container is closed while a ZFS pool is still trying to write... you would have corruption (the raid and checksums won't help).

    Ohhh, just one more concern.


    The LUKS and ZFS plug-ins were made by separate devs.


    I was wondering how did you manage a proper shutdown procedure when OMV uses both LUKS and ZFS.


    Are you doing something to make sure that the ZFS pool is unmounted before LUKS container is closed?


    A proper OMV shutdown should be something like:
    stop all sharing services.
    flush all writes.
    unmount ZFS pool.
    close LUKS containers.
    etc...
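
    A minimal sketch of that order, assuming a Samba share, a pool named tank, and a LUKS mapping named sda-crypt (all names are placeholders, not what the plugins actually use):

```shell
# Stop services that hold files open on the pool.
systemctl stop smbd nmbd

# Flush pending writes.
sync

# Export the pool: this unmounts every dataset and flushes ZFS's own buffers.
zpool export tank

# Only now is it safe to close the LUKS mapping.
cryptsetup luksClose sda-crypt

# Finally power off.
shutdown -h now
```

    On a plain "shutdown -h now" the init system is supposed to stop services and unmount filesystems in dependency order anyway; the explicit sequence above only matters if the two plugins don't declare that ordering between themselves.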


    How did you co-ordinate this?

    Thanks guys, you're awesome!!


    Thank you so much for creating these two plugins!


    Is adding a rotating auto-snapshot feature to the plugin in the realm of possibility?
    That along with scheduled scrubs would make it feature complete.


    Here is a linux project of interest.
    https://github.com/zfsonlinux/zfs-auto-snapshot
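
    In the meantime, both can be done with plain cron. A sketch, assuming a pool named tank (a placeholder; zfs-auto-snapshot ships its own cron jobs, the lines below are a hand-rolled alternative without the rotation/pruning it adds):

```shell
# /etc/cron.d/zfs-maintenance (sketch)
# Daily recursive snapshot at 01:00, tagged with the date.
# Note: % must be backslash-escaped in cron command fields.
0 1 * * * root /sbin/zfs snapshot -r tank@auto-$(date +\%Y\%m\%d)

# Monthly scrub on the 1st at 02:00.
0 2 1 * * root /sbin/zpool scrub tank
```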

    Here are some tips about getting auto-versioning in Windows (because SMB has this built in). So SMB + ZFS allows for direct GUI version control in Windows.
    [GUIDE] Windows Previous Versions and Samba (Btrfs - Atomic COW - Volume Shadow Copy)

    OK, but you know ZFS is more than a filesystem, so formatting isn't the only concern. It is also the redundancy supplier... a.k.a. RAID. My concern was that since you, the core team, did not make the ZFS plugin, the dev may have done the zpool creation via the unix device names (sda, sdb, sdc, etc.)... which would work fine and dandy, because ZFS does support this... but then one day, years down the road, people start swapping drives and disaster strikes, because sdb becomes sdc or whatever, perhaps because some USB drive was plugged in.


    So while I know the ZFS pool creation command supports by-id... few guides show that, and few devs know it... so I had no idea what command he used. I did see some of the dev's posts and saw that it was a struggle to create that plugin... he had little time and it took a while to get it to function at all. By the way... I don't mean this as a swipe at the dev... hats off to him... this is the most valuable OMV plugin along with LUKS... a huge value... so greatly appreciated, to say the least.


    I was just wondering...

    Ahhh... I can help you with this.


    You don't need a domain for this. Just create users and use the permissions button.


    A domain is just an alternative to an IP. Kind of useless on a LAN. Useful across the internet because it is "cooler" to connect to a server with http://www.kickasssite.com than 45.145.156.123... be it a web server, FTP server, or SMB server.


    It's just a name to replace the numbers, because people can't remember numbers. I don't know why... :D Personally I don't see why saying hello John is better than saying hello 345. Sure, you may say that if we gave everyone a unique number (UUID) for a name, there would be too many digits... but consider that John is also not unique. I think it would be a fine improvement in the state of things if everyone had a number for a name. :D Also, it would be more universal. I could be number 1... but you can be number 1 too... hell, we can all be number 1! Some crazy person may deviate and call their kid 4562... and a real rebel may really deviate and call their kid 8734573924375... it's all cool by me. However, if someone went crazy and called their kid PI or E, we would have to put our foot down, because we can't have a kid's name be endless. We do need some limits... child abuse and all...

    hmmm... they said the zfs plugin was stable.


    Perhaps you should give more details... what exactly was the problem? Did you try to transfer 3 TB of movies at once? What are your system specs?


    With ZFS, you need 8 GB of RAM at a minimum, even with deduplication turned off. Run the command to see whether deduplication is turned off. Also turn off ZFS file compression.
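
    The checks I mean, assuming a pool named tank (placeholder name):

```shell
# Dedup should read "off" (it is off by default).
zfs get dedup tank

# Check compression, and turn it off if it is on.
zfs get compression tank
zfs set compression=off tank
```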


    You should keep ZFS, because everything else is primitive and defective by design... ext4 + md RAID won't stop your files from corrupting... and backups won't help either, since eventually your good backup copy will be swapped out for a bad bit-rotted copy as your backups cycle, and you'll be left with xxxxx

    Thanks, I guess I should take that as confirmation then...


    Look, while I joined the forum 3 years ago, I don't see how you can conclude that I should know how you've implemented internal components. The answer is not obvious from the user's point of view just from using the product... and in fact the GUI suggests otherwise, since it does not show device models or IDs... quite unfriendly when selecting disks... it only references devices via the unix device names (sda, sdb, etc). This is "OK"... as long as internally that is not used at all... by any plug-in.


    I'd have to investigate and set up complex tests to deduce the answer... I thought it would be more efficient to just ask.
    I have searched for the answer, you know...


    Anyways, hope you're right about the plug ins.

    I was wondering if these two plugins were made to use /dev/disk/by-id
    instead of the /dev/sda shenanigans that people tend to use by default.


    The reason I ask is that I am concerned that if I use your plugins, and the sda/sdb/sdc labels then change in the system due to disks being plugged in or other tasks...
    my data will become lost/corrupt... or who knows what.


    Generally, it is not wise to use sda/sdb for anything... instead, disks should be referenced by ID... because that does not change during system admin tasks...
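
    For reference, the stable names live under /dev/disk/by-id, and ZFS will happily take them at pool creation (the id strings below are made-up placeholders):

```shell
# Map the throwaway sdX names to stable ids.
ls -l /dev/disk/by-id/

# Create the mirror using the stable ids instead of sda/sdb.
zpool create -o ashift=12 mypool mirror \
    ata-DISKMODEL_SERIAL1 ata-DISKMODEL_SERIAL2
```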

    Thanks guys,


    So If I install the LUKS plugin and the ZFS plugin... which should I do?


    (OPTION A)
    1 - Create a ZFS RAID mirror with a ZFS filesystem?
    2 - Encrypt with LUKS? How?


    (OPTION B)
    1 - Create normal non ZFS RAID1 via webGUI
    2 - LUKS encrypt the RAID1 vol via webGUI
    3 - Open the LUKS-encrypted volume via webGUI
    4 - Apply ZFS file system for bitrot protection
    5 - Apply some cron jobs to schedule ZFS snapshots (Not sure how)
    6 - Apply some cron jobs to schedule zfs scrubs


    (OPTION C)
    1 - LUKS encrypt drives via webGUI
    2 - Open LUKS drives via webGUI
    3 - Use ZFS plugin to create a ZFS mirrored zfs pool
    4 - Apply some cron jobs to schedule ZFS snapshots (Not sure how)
    5 - Apply some cron jobs to schedule zfs scrubs



    Not sure how option A is possible if I choose to make a ZFS-type RAID.


    Also, are zfs snapshots possible as I've stated in B.5?


    Also, does the ZFS plugin enable zfs compression by default?

    I would like to know:


    (1) is the ZFS plugin now stable? The posts I saw made it look like it was barely working...
    (2) does the ZFS plugin support zfs snapshots via the GUI?
    (3) how can one use ZFS with encryption with OMV?