Posts by jodumont

    Hi;


    I played a little with OMV 5 and saw that Docker is included by default.


    I always prefer deploying applications via a Unix socket instead of a TCP socket, which improves performance and security, but I'm also in favour of using Docker to keep applications up to date.


    My questions are:
    1. Would you run NGINX and MySQL directly on OMV and then add the Docker user to the nginx group, like we do with a PHP-FPM pool?
    2. Will the OMV team keep nginx and MySQL in the future, or delegate everything to Docker?


    I personally like having nginx and MySQL on the host because I can easily use fail2ban to protect nginx and access MySQL via a Unix socket.
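
    To illustrate what I mean by the Unix-socket approach, here is a rough sketch of an nginx vhost snippet; the socket path is just an example for a Debian PHP 7.3 setup, adjust it to your own PHP-FPM pool:

    Code
    # hypothetical vhost snippet: PHP-FPM reached over its Unix socket
    # instead of a TCP socket like 127.0.0.1:9000
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }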


    Share your thoughts, experiences, hopes, questions...
    I'm all ears ;)

    As both moderators said, using the openmediavault ISO is a good start; and there is also nothing wrong with copy and paste; it is not about being lazy, it is about preventing mistakes.


    1. If it is because you want to partition your drive yourself, you could simply remove the preseed options at the GRUB prompt and then follow the D-I (Debian Installer).


    2. You should identify your hardware, because saying "PC" is like saying "I'm using a car".
    Your PC could be an i386.
    To find out your architecture, run:

    Code
    uname -sm

    @heatblood


    If your partition is ext4 then you can reduce its size: simply install omv-extras and then GParted, or simply make a GParted USB key
    and reboot into GParted.


    With GParted you will be able to shrink your partition.


    Another way is before the installation: when you boot from the OMV installation media,
    you can edit the GRUB entry by pressing TAB, remove the unattended/preseed part, then launch the installation.
    This lets you interact with the D-I (Debian Installer) and still install OMV.
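
    Roughly, the boot line contains preseed arguments along these lines; the exact parameters vary between ISO releases, and those are the ones to delete:

    Code
    # example only, the real line differs per ISO release
    linux /install/vmlinuz auto=true priority=critical file=/cdrom/preseed.cfg quiet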

    While YunoHost and OMV are both strong projects built on Debian, both using PHP 7, and they complement each other perfectly, I would not recommend installing them on the same machine at the same level.
    As recommended, running OMV as the host and YunoHost in a VM would be the smartest way, unless you plan to connect OMV to the YunoHost LDAP; in that case it might be better to have YunoHost ready before OMV requests access to the LDAP. If that is your case, I would recommend putting both into VMs under Proxmox ;)

    My bad;


    my main drive in my RAID was failing.
    As soon as I replaced it and ran update-grub again,
    everything behaved as expected.


    Debian GNU/Linux by default,
    GParted and SystemRescueCD as options in the GRUB menu.
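
    For anyone hitting the same thing, a sketch of the kind of commands involved in such a replacement (device and array names are examples only):

    Code
    # example devices: re-add the new disk to the degraded array, then refresh GRUB
    mdadm --manage /dev/md0 --add /dev/sdb1
    grub-install /dev/sdb
    update-grub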

    Hi;


    I think it's worth mentioning: OMV is installed on a single partition on top of a RAID array, and GRUB was installed in the MBR.
    Also: I just installed GParted Live, but it is not available in the GRUB menu on reboot.


    My system always reboots into SystemRescueCD even though:


    1. I set the default boot kernel to Debian GNU/Linux; I even tried setting it to SystemRescueCD and then back to Debian GNU/Linux, without success


    Screenshot at 2019-03-12 09-09-26.png


    2. GRUB_DEFAULT=0 is set in my /etc/default/grub and I ran update-grub


    Screenshot at 2019-03-12 09-12-58.png


    3. In /boot/grub/grub.cfg, Debian GNU/Linux is the first entry (see below)


    Screenshot at 2019-03-12 09-19-28.png


    and SystemRescueCD is the last entry and the only other one


    Screenshot at 2019-03-12 09-20-31.png


    So my question is: how do I resolve this, or how do I remove/uninstall/reinstall SystemRescueCD properly?


    Regards!


    Jonathan

    It depends on how many clients will connect simultaneously.
    I used OMV for file sharing (SMB) and as a TimeMachine target on a Raspberry Pi 2 with 2 USB drives for 5 users without issues,
    except at the beginning of the TimeMachine setup, where I had to ask users to leave their laptops overnight and/or over the weekend for the first sync. But after that, even after two years, I never heard about it again, and they are still my clients for other projects ;)

    brendangregg.com/ActiveBenchmarking/bonnie++.html

    Maybe I understood it wrongly, but


    the bonnie++ test compares Fedora running under KVM against Solaris in a container;
    obviously the I/O will be faster in the container.


    I'm not saying Solaris doesn't kick ass, but to compare apples with apples, which in this case means ZFS vs other filesystems, you must isolate all other variables, such as using the same OS, i.e. comparing Fedora with ext4 or XFS against Fedora with ZFS.


    https://www.phoronix.com/scan.…item=zfs_ext4_btrfs&num=5
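
    For example, a fair comparison would run the same benchmark on the same machine and OS against each filesystem, something like this (mount points are placeholders):

    Code
    # same machine, same OS, only the filesystem under the mount point changes
    bonnie++ -d /mnt/ext4-test -u root
    bonnie++ -d /mnt/zfs-test -u root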

    Sadly this is still persisting,
    but it's not related to OMV, it's more a VirtualBox issue.


    Even via VBoxManage it seems impossible to kill the VM.


    A hint to kill it is to list your VMs via ps:
    ps -ef | grep VBox


    then kill the process whose command line contains the name of your VM.
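
    For example (the VM name is a placeholder):

    Code
    # "MyVM" is just an example name; find the stuck VM process
    ps -ef | grep VBoxHeadless | grep MyVM
    # then kill it by its PID (add -9 only as a last resort)
    kill <PID>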

    Like @ryecoaaron mentioned somewhere someday,


    he uses OMV on Proxmox.
    After reviewing the HowTo about installing KVM on OMV
    [HowTo] Run OMV as a KVM/qemu/libvirt host


    then messing around with ZFS, then ... I will omit a few details ...
    I reinstalled, ..., again, ... my dev server with Proxmox and thought, why not OMV in an unprivileged LXC?
    I mostly have containers running Docker in a VM, plus a few local services (offered through LXC) which need access to different data shared on the local network.

    My questions are:

    Where should I put my data?
    Inside a virtual disk, or directly on the Proxmox host, mapping it into the container by the magic of LXC (I did a few tests and it sounds good; see the sketch below).
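
    By "the magic of LXC" I mean a bind mount point on the container, e.g. (container ID and paths are just examples):

    Code
    # bind-mount a host directory into container 101 (example ID and paths)
    pct set 101 -mp0 /srv/data,mp=/srv/data
    # which ends up in /etc/pve/lxc/101.conf as:
    # mp0: /srv/data,mp=/srv/data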


    I would like to hear about your setups, ideally OMV on Proxmox:
    1. Is it via KVM or LXC?
    2. How do you manage your data?
    3. And your backups?
    4. How do you share the data between OMV and the other instances?
    5. Are they mostly LXC or KVM?
    6. If you had to rebuild, what would you do differently?

    For general purposes, and when I only have one physical machine, I prefer OMV over Proxmox.
    The only thing I found not optimal is the VirtualBox part (which I used, and which made me happy for a very long time, before I decided to switch to KVM).


    So.
    If anyone decides to use KVM instead of VirtualBox on their OMV, the methodology described by @jensk is still pretty accurate;
    here you will find the updates I made to get KVM running on OMV 4.


    1. Install libvirt, KVM and friends
    apt -y install qemu-kvm bridge-utils libvirt-daemon libvirt-daemon-system


    2. Configure your bridge interface
    NOTE1: If you want to configure the bridge via the command line, I would recommend having no config in the WebUI under System -> Network -> Interfaces.
    NOTE2: Replace the ip, netmask, broadcast, gateway, YOUR_INTERFACE and so on :)
    You can find this info with the ip or ifconfig command:
    ip a s | grep -A3 -m1 MULTICAST


    nano /etc/network/interfaces
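
    As a minimal sketch (the addresses and interface name are placeholders, see NOTE2; @jensk's original post has the complete configuration):

    Code
    auto br0
    iface br0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports YOUR_INTERFACE
        bridge_stp off
        bridge_fd 0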


    I would recommend rebooting rather than running service networking restart.


    Don't forget: all the extra configuration @jensk does in his original post is worth it; I just didn't want to duplicate the info here.


    3. Managing the VMs
    Now you can manage them with virsh (https://linux.die.net/man/1/virsh).
    But if, like me, you like a GUI, you can use Virt-Manager on your desktop/laptop:
    apt -y install virt-manager virt-viewer


    On the server (OMV) side, make sure you have a netcat (nc) that accepts the -U parameter:
    apt remove --purge netcat
    apt -y install netcat-openbsd
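
    From the desktop, Virt-Manager then connects to the OMV host over SSH; the equivalent on the command line looks like this (the hostname is a placeholder):

    Code
    # remote libvirt connection over SSH (this is where nc -U comes into play)
    virt-manager -c qemu+ssh://root@omv.example.lan/system
    # or with virsh:
    virsh -c qemu+ssh://root@omv.example.lan/system list --all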


    that's it!


    Jonathan

    Thanks for your reply @Sc0rp


    Why this layout?
    Because
    md0 and md1 are SSDs,
    md2 is on SATA drives.


    How would you do it?


    Anyway, I reinstalled with the same layout, but defined everything during the installation, and everything is fine now.

    From experience:


    If you need more I/O from your storage,
    ZFS with a dedicated log (SLOG/ZIL) and a cache (L2ARC) will blow away any competition (we're talking about 5 times faster).
    Don't be a fool: you can use a partition for each of these caches, you don't have to dedicate a whole drive to each of them.
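
    A sketch of adding those as partitions to an existing pool (the pool and partition names are examples):

    Code
    # example pool "tank", example SSD partitions
    zpool add tank log /dev/sdb1     # SLOG: speeds up sync writes
    zpool add tank cache /dev/sdb2   # L2ARC: read cache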


    But it's also possible to use a cache with LVM, which might be more comfortable for you if you have more experience with LVM than with ZFS.
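
    The rough LVM equivalent (volume group, LV and partition names are examples):

    Code
    # example names: build a cache pool on an SSD partition and attach it to an existing LV
    lvcreate --type cache-pool -L 50G -n datacache vg0 /dev/sdb3
    lvconvert --type cache --cachepool vg0/datacache vg0/data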


    In any case, don't worry if the cache device dies; you just lose the performance boost.


    Please, forget Btrfs, at least for now.

    I had this error after installing the omv-flashmemory plugin on OMV 4, which was installed from the OMV ISO onto an SSD.


    As mentioned by Aaron in comment #14, running

    Code
    omv-aptclean
    rm /var/lib/openmediavault/dirtymodules.json


    fixed my issue, at least for the moment.


    So it didn't :(


    ---- >>


    The offending files (bg*) are empty.


    I ended up adding a cron job to rm -Rf /tmp/bg*, something like the entry below.
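
    Something like this in root's crontab (the schedule is arbitrary):

    Code
    # added with "crontab -e" as root; runs nightly at 03:00
    0 3 * * * rm -Rf /tmp/bg*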

    This situation came back again.


    I do rsync to USB,
    but I want fault tolerance too.


    Basically I have 3 RAID1 arrays:
    md0 for boot
    md1 for LVM
    md2 for data


    Only md2 disappears on boot,
    but it was also the only one I built through the OMV interface, not during the debian-installer.


    Code
    # mdadm --detail /dev/md2


    /dev/md2:
    Version : 1.2
    Creation Time : Wed Dec 6 13:11:10 2017
    Raid Level : raid1
    Array Size : 2930135488 (2794.39 GiB 3000.46 GB)
    Used Dev Size : 2930135488 (2794.39 GiB 3000.46 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Wed Dec 6 13:34:43 2017
    State : clean, resyncing
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0


    Resync Status : 7% complete


    Name : ra:2 (local to host ra)
    UUID : 485973b2:2e0ec7d1:b256fb1a:1d7dca3d
    Events : 324


    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc


    Code
    # cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md/0 metadata=1.2 name=ra:0 UUID=b341e185:33cce37b:7b27f804:6aece78e
    ARRAY /dev/md/1 metadata=1.2 name=ra:1 UUID=86c11a9d:8a3e5db6:e72397c5:89cefae9
    ARRAY /dev/md2 metadata=1.2 name=ra:2 UUID=485973b2:2e0ec7d1:b256fb1a:1d7dca3d
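
    For context, a sketch of the kind of commands one might suggest, assuming the superblocks are intact (not a confirmed fix):

    Code
    # hypothetical recovery steps, not a confirmed fix
    mdadm --assemble /dev/md2 /dev/sdc /dev/sdd
    mdadm --detail --scan          # compare against /etc/mdadm/mdadm.conf
    update-initramfs -u            # ensure the array definition lands in the initramfs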


    Now, I might seem rude, and it's nice to have discussions about what a backup is, how to manage my data, and the doom of RAID5, but none of this brings a solution.
    So please, if you want to help me and potentially other users, propose a solution and/or commands to try.


    Thanks!


    Jonathan