How to install Debian GNU Linux to a Native ZFS Root Filesystem

  • Hi! I want to go a step further: I intend to install Debian GNU/Linux to a native ZFS root filesystem, and then finally install OpenMediaVault on top of it...


    Why go to all this trouble when I can have OpenMediaVault the easy way from an ISO?


    Good question, I must say...


    The answer is quite simple... I want ZFS implemented at the root of the operating system, not as an extra plugin...


    It's sad that such a great piece of software as OpenMediaVault doesn't come with ZFS supported by default at the root of the Debian ISO it is built on, though that may be a Debian issue; I'm not really sure about it...


    But I would certainly like to see that become the default in the near future...


    But anyway, I'm not here to complain... I love this project and I'm here to learn more... For that I need the help of people more experienced than me...


    I have in mind to build an OpenMediaVault server from scratch, with a native ZFS root filesystem on Debian and OpenMediaVault then installed from scratch on top, on real hardware. But so far I'm trying hard in VirtualBox to really understand how to set everything up...


    So I found a few useful links; the first one is this: https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Debian-GNU-Linux-to-a-Native-ZFS-Root-Filesystem
    But I got stuck on the second step... I need help because I don't know how to do the disk partitioning: I'm not understanding how to do it properly on the command line, and then how to use the symbolic links for those disk IDs to carry on with the ZFS install...


    So, is there anyone who can explain this in a more practical way?
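

    (For anyone who lands here, a minimal sketch of that step, assuming a single VirtualBox disk. The device name, pool name, and partition type codes follow the style of the linked HOWTO, but they are placeholders, not commands copied from it.)

        # List the stable /dev/disk/by-id symbolic links for your disks
        ls -l /dev/disk/by-id/

        # Hypothetical device name -- substitute your own from the listing above
        DISK=/dev/disk/by-id/ata-VBOX_HARDDISK_VBexample

        sgdisk --zap-all $DISK                 # wipe any old partition table
        sgdisk -a1 -n2:34:2047 -t2:EF02 $DISK  # tiny BIOS boot partition for GRUB
        sgdisk -n1:0:0 -t1:BF01 $DISK          # rest of the disk for ZFS

        # Create the root pool from the by-id name (stable across reboots)
        zpool create -o ashift=12 rpool ${DISK}-part1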


    For OpenMediaVault from scratch I got this one: Howto install OpenMediaVault on Debian 7.x (Wheezy). I don't know if there is a better way, because that guide misses a few steps, like not being able to add the keyring...


    I also need help on how to create or assign an SSD cache...
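

    (A minimal sketch of what assigning an SSD cache looks like once a data pool exists; the pool name "tank" and the device names are hypothetical. An L2ARC read cache is added with "zpool add ... cache", a separate intent log with "zpool add ... log".)

        # Partition 1 of the SSD as an L2ARC read cache for pool "tank"
        zpool add tank cache /dev/disk/by-id/ata-SAMSUNG_SSD_EXAMPLE-part1

        # Partition 2 as a separate intent log (SLOG) for synchronous writes
        zpool add tank log /dev/disk/by-id/ata-SAMSUNG_SSD_EXAMPLE-part2

        zpool status tank   # verify the cache and log vdevs appear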


    So I really need help, because by the end of the summer of 2016 I have to be ready to install this on the following hardware configuration:


    Case : SilverStone SST-DS380B


    Motherboard : Asrock Rack C2750D4I


    RAM : Hynix 64 GB (4x 16 GB) PC3L-10600R 2Rx4 DDR3 1333 MHz ECC registered server memory


    Power Supply : Silverstone SST-ST4SF-G 450W


    Boot SSD : Kingston SSDNow KC380 60 GB


    Cache SSD : SAMSUNG 850 PRO MZ-7KE256BW - 256 GB


    Storage HDDs : 8x WD Red 6 TB (WD60EFRX) or Seagate Enterprise NAS 6 TB 7200 rpm 128 MB SATA III 3.5" (ST6000VN0001), in RAID-Z3
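

    (For the 8-disk RAID-Z3 layout above, pool creation would look roughly like this; the pool name and by-id names are placeholders.)

        # Hypothetical 8-disk RAID-Z3 data pool (any 3 disks may fail)
        zpool create -o ashift=12 tank raidz3 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK1 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK2 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK3 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK4 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK5 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK6 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK7 \
            /dev/disk/by-id/ata-WDC_WD60EFRX_DISK8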


    My native language is Portuguese, so sorry if I've misspelled anything... I can take help in either Portuguese or English, both of which I understand...


    But if anyone could do a video series explaining the steps, that would be even better...


    So, can anyone help me, and break this down into something simpler to understand, with real examples?


    Most likely my problem is a lack of knowledge, or my understanding of the examples, or both... So please explain this to me better: https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Debian-GNU-Linux-to-a-Native-ZFS-Root-Filesystem


    I also need help with link aggregation for the 1 Gbit Ethernet ports...
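

    (A sketch of 802.3ad link aggregation on Debian, assuming the ifenslave package is installed and the switch supports LACP; the interface names and addresses are made up.)

        # /etc/network/interfaces -- bond the two onboard 1 Gbit ports
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bond-slaves eth0 eth1
            bond-mode 802.3ad
            bond-miimon 100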


    Sorry for the long text, and thank you for your patience and for any help at all...



    Thank you all...

  • I just tried it in VirtualBox, but I can't assign the ZFS symbolic links to the filesystem, because I don't know how to partition for ZFS, and I don't know the command to check UUIDs, or whether there is a simpler way...
    So I did a normal debootstrap...
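
    (For reference, the two pieces mentioned here look roughly like this; the target directory and mirror are the usual ones from Wheezy-era guides, not taken from this thread.)

        # Show filesystem UUIDs for every block device
        blkid
        # The stable symbolic links ZFS prefers over raw /dev/sdX names
        ls -l /dev/disk/by-id/

        # A plain debootstrap of Wheezy into the new root mounted at /mnt
        debootstrap wheezy /mnt http://ftp.debian.org/debian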

    • Official Post

    Well, ZFS doesn't use fstab, so the UUIDs are pointless here, I assume. Anyway, the command for UUIDs is always blkid.
    The partitioning looks standard to me; the difference is the point they make about whether you want a separate boot partition or keep the boot data inside ZFS. You have to change certain boot parameters for that.


    They also mark it as important to have /boot/grub on a small separate partition.
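

    (A sketch of those boot parameters as they appear in guides of that era; the dataset name rpool/ROOT/debian-1 and the disk name are assumptions.)

        # /etc/default/grub -- boot straight from a ZFS dataset
        GRUB_CMDLINE_LINUX="boot=zfs root=ZFS=rpool/ROOT/debian-1"

        # then, from a shell inside the installed system:
        update-grub
        grub-install /dev/disk/by-id/ata-EXAMPLE_DISK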


    This place apparently has a SystemRescueCd with the ZFS modules already included; I would start from there. All in a VM, of course; then you can clone the disk if you want.


    http://www.funtoo.org/ZFS_Install_Guide
    http://ftp.osuosl.org/pub/funt…esccd-4.5.2_zfs_0.6.4.iso


    More documentation, this time for Ubuntu, that might help:


    https://github.com/zfsonlinux/…ative-ZFS-Root-Filesystem


    EDIT: using SystemRescueCd would not be a good idea after all, since the DKMS modules would get built against that kernel (the SystemRescueCd one), so you need an up-to-date live Wheezy instead.
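
    (To check which kernel the modules were actually built against:)

        dkms status                  # lists zfs/spl builds per kernel version
        uname -r                     # the kernel you are running right now
        modinfo zfs | grep vermagic  # must match the kernel the target will boot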

    • Official Post

    You don't need a Debian ISO to rescue a ZFS system. Sabayon and Pentoo are about the only rescue disks with ZFS support I can think of.


  • It's simple: the ZFS filesystem has built-in error correction; it's aware of both the disks and the files, so if something goes wrong it will correct it, and if a hard drive is dying it will inform me about it...


    Above all, I want bulletproof network-attached storage, because I have 5 TB of critical files that I want to share 24/7 across 3 houses (my house, my parents' house, and a friend's house), and because I want to do 24/7 video surveillance of 2 houses (mine and my parents'), with the traffic encrypted through OpenVPN. And I don't want the system going down because of a bad update, for example...


    I know that Clonezilla does wonders, but I want that extra security...


    And I don't want to install extra packages; I just want to install the things I'm really going to use on it...

    • Official Post

    I think you will have more issues with updates running a non-standard OMV config than you would running a RAID 1 array for the OS drive. The OS drive really doesn't do much. I understand ZFS on the data drives, but not on the OS drive. SMART tests should tell you when a drive is about to fail, and error correction is probably not an issue for the 1 or 2 GB of files on the OS drive.
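

    (A RAID 1 mirror for the OS is a one-liner with mdadm; the device names are hypothetical:)

        # Mirror the OS partition across two small disks
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --detail /dev/md0   # confirm both members are active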


    In my opinion, if you are that worried, use a hardware RAID controller with error correction and battery backup. That is more bulletproof than ZFS on Linux for the boot drive.


  • Hi Thundersun


    (cool name btw)


    In my opinion, for the level of resilience you want, you need a complete second system. And once you have that, you no longer need to be so concerned about the OS.


    Two systems doing identical work, both collecting all data when both are running correctly, arranged so that either system will carry on collecting all data if the other goes down.


    As a good second best, two systems collecting different data, but with overlapping cameras so that either system on its own has useful partial coverage, rsyncing every NN minutes, gets close. Make sure the systems are in different houses, and that the camera watching system 1 records to system 2, and vice versa. This covers deliberate sabotage intended to disable the system before a further attack on the premises. Try not to let it be obvious that they record to each other's systems, and only people you REALLY trust should get to know about the data crossover. Then if one system is physically stolen, you have a complete dataset up to the last rsync, plus a partial dataset since then that hopefully contains images of the bad guys compromising the missing server.
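

    (The periodic rsync could be as simple as a cron entry like the one below; the paths, interval, and hostname are made up for illustration.)

        # /etc/cron.d/camera-sync -- push new footage to the other house every 15 min
        */15 * * * * root rsync -a --partial /srv/cameras/ backup@house2.example.org:/srv/cameras-house1/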


    Worrying about redundancy of the OS before thinking about redundancy for the power supplies and for the boxes themselves is, in my respectful opinion, a mistake. And theft is a risk, whether out of avarice or by a thief wanting to compromise your surveillance. Two systems, in separate houses, are much safer.


    I'd put a RAID 5 or 6 in each box, but each box would have only a single disk for the OS.


    That way you cover points of failure that ZFS cannot reach: like if a PSU dies, or a whole box gets stolen, and so on.


    Normally these days I would not suggest RAID 5. On a standalone RAID 5 system there is too big a chance that a second drive goes down while you are still rebuilding, and you lose all your data. However, with complete system redundancy, you can lose any three disks across both servers and still have retrievable data. AND with four disks lost, there is only one combination that causes data loss (two from each server). That would be good enough for me, but if you are even more cautious, go for RAID 6 on each server.


    (Notice that RAID 6 with four disks has the same data capacity as RAID 10 with four disks, and almost the same speed, but a lower risk of losing everything: with RAID 10, if you lose both disks of the same mirror pair you lose all, but with RAID 6 you can lose ANY two.)
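

    (For comparison, the four-disk RAID 6 described above, with hypothetical device names:)

        # Four-disk RAID 6: capacity of two disks, survives ANY two failures
        mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[b-e]1
        mdadm --detail /dev/md1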


    In my opinion, ZFS is good tech, and a very good solution to some requirements, but not to the particular requirements you describe in this thread.


    Regards,
    R~~


    Edit:


    PS: I have only just noticed the date of the OP. Apologies to anyone upset by my answering an 11-month-old question.

