Posts by z06frank

    Yeah.....I can see now what I did with how files are distributed across the two 8TB HDDs I have. Cabrio-leo's link above in post #10 gave me a perfect insight into how the various OMV UnionFS options build the "media" arrays. I did the typical noob thing of ready-fire-aim before reading how the array build types vary.


    A few posts here re: UnionFS and file distribution gave me the idea to create root folder(s) on Data2 that match the file structure on Data1. One guy said to try this and see whether the same "Existing path, most free space" policy will continue building onto Data2 if Data1 reaches the set limit. Can anyone verify this?
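    For context, this is roughly how the two create policies I'm comparing would look as mergerfs mount options in /etc/fstab. It's just a sketch — the UUID paths, mount point, and minfreespace value are placeholders, not my actual pool (the OMV UnionFS plugin generates this line for you):

    # "Existing path, most free space": new files land on a branch that already has the parent folder
    /srv/dev-disk-by-uuid-DATA1:/srv/dev-disk-by-uuid-DATA2 /srv/media fuse.mergerfs defaults,allow_other,category.create=epmfs,minfreespace=20G 0 0

    # "Most free space": new files always go to whichever branch currently has the most free space
    /srv/dev-disk-by-uuid-DATA1:/srv/dev-disk-by-uuid-DATA2 /srv/media fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=20G 0 0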


    I guess plan B would be to restructure my files so that Video goes on Data1 and everything else on Data2 as the root file structures. I'm really thinking long term, and I don't want to change the array to "most free space" and have files scattered randomly across the 2 drives to rebuild if a drive fails. Tell me if I'm wrong in my thought process here!


    The root file structures from WinSCP look like this attachment (obviously the Movies video folder is the large data hog).....thanks

    I know this is an old thread, but my question is similar (aside from updated software). I have OMV 5.6.2-1 on the latest Armbian 21.02.3 Buster with Linux 5.10.21-rockchip64. I built the system a few weeks back and loaded all my media from various HDD LAN disks onto the new server.


    Current pertinent file structure from df -h:


    /dev/sda1 137G 4.7G 130G 4% / (OS System on SSD)

    /dev/sda2 90G 952M 88G 2% /data (2nd Partition on SSD)


    /dev/sdb1 7.3T 5.6T 1.8T 77% /srv/dev-disk-by-uuid (Parity disk)

    /dev/sdd1 7.3T 6.6T 748G 90% /srv/dev-disk-by-uuid (Data1 disk)

    /dev/sdc1 7.3T 674G 6.7T 10% /srv/dev-disk-by-uuid (Data2 disk)

    media:d 15T 7.3T 7.4T 50% /srv/ (Union FS merger)


    SnapRAID sync, diff and scrub are working fine with no errors. As you can see from this, nearly all of the media data is sitting on Data1 (sdd1) and is not splitting between sdd1 & sdc1. I've set up UnionFS this way.....which I thought was the right way to share all the "media" data across the two 8TB drives.
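    For reference, this is how I understand you can check (and temporarily switch) the live create policy through mergerfs' runtime control file — the /srv/media mount point here is just a placeholder for my actual pool path:

    # show the create policy the running pool is actually using
    getfattr -n user.mergerfs.category.create /srv/media/.mergerfs

    # temporarily switch to "most free space" (not persistent across a remount)
    setfattr -n user.mergerfs.category.create -v mfs /srv/media/.mergerfs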



    Very Noob questions

    (1) Even though I downloaded the MergerFS Folders plugin, I did NOT do anything with this add-on (i.e. I've not "added" any drives)...do I need to, and if so....how?

    (2) Can or should I delete (start over on) all the SnapRAID info and rebuild it the right way (hopefully without deleting data)?
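    In case it helps frame question (2): from what I've read, if I move files between the data disks by hand (directly on the branch filesystems, not through the pool mount), SnapRAID should be able to pick up the moves without me throwing the existing parity away. A rough sketch of what I mean — the paths are placeholders for my disks:

    # preview what SnapRAID thinks has changed (moved/copied/removed files)
    snapraid diff

    # move a folder from Data1 to Data2 directly on the branch filesystems
    rsync -avP --remove-source-files /srv/dev-disk-by-uuid-DATA1/Movies/ /srv/dev-disk-by-uuid-DATA2/Movies/

    # update parity to match the new layout, then spot-check it
    snapraid sync
    snapraid scrub -p 10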


    Thanks a bunch from a new OMV user.....

    That seemed to help resolve the constant bombardment of DHCP requests. I also made a change in my Asus RT-AX88U router and turned off "Enable Router Advertisement" under the IPv6 settings.


    Seeing people talk about enabling IPv6 in software and routers....the complication level goes up exponentially....8| I agree.

    Thanks for your help crashtest!


    Enable Router Advertisement (Asus RT-AX88U)

    Are you saying the OMV static IP address (192.168.1.115) I assigned during setup is not being addressed properly on the LAN? Here is the ifconfig output from the OMV (GryzNAS). Note:


    root@GryzNAS:~# ifconfig

    docker0:

    flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

    inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255

    inet6 fe80::42:95ff:fe84:51f2 prefixlen 64 scopeid 0x20<link>

    ether 02:42:95:84:51:f2 txqueuelen 0 (Ethernet)

    RX packets 0 bytes 0 (0.0 B)

    RX errors 0 dropped 0 overruns 0 frame 0

    TX packets 1580 bytes 398827 (389.4 KiB)

    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    eth0:

    flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

    inet 192.168.1.115 netmask 255.255.255.0 broadcast 192.168.1.255

    inet6 fe80::6662:66ff:fed0:984 prefixlen 64 scopeid 0x20<link>

    inet6 2601:246:cc00:a00:b2cd:7291:62a4:63d8 prefixlen 64 scopeid 0x0<global>

    inet6 2601:246:cc00:a00::107 prefixlen 128 scopeid 0x0<global>

    inet6 2601:246:cc00:a00:6662:66ff:fed0:984 prefixlen 64 scopeid 0x0<global>

    ether 64:62:66:d0:09:84 txqueuelen 1000 (Ethernet)

    RX packets 45681728 bytes 29074191714 (27.0 GiB)

    RX errors 0 dropped 515686 overruns 0 frame 0

    TX packets 49102046 bytes 72017111902 (67.0 GiB)

    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    device interrupt 27


    lo:

    flags=73<UP,LOOPBACK,RUNNING> mtu 65536

    inet 127.0.0.1 netmask 255.0.0.0

    inet6 ::1 prefixlen 128 scopeid 0x10<host>

    loop txqueuelen 1000 (Local Loopback)

    RX packets 194187 bytes 83526694 (79.6 MiB)

    RX errors 0 dropped 0 overruns 0 frame 0

    TX packets 194187 bytes 83526694 (79.6 MiB)

    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    veth3bc86dc:

    flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

    inet6 fe80::ec4f:93ff:fece:97dd prefixlen 64 scopeid 0x20<link>

    ether ee:4f:93:ce:97:dd txqueuelen 0 (Ethernet)

    RX packets 0 bytes 0 (0.0 B)

    RX errors 0 dropped 0 overruns 0 frame 0

    TX packets 1604 bytes 404480 (395.0 KiB)

    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0



    Should I set up a new VLAN on eth0 with a static IP? And if so, should I assign this IPv4 address via DHCP or create a new static IP for the VLAN?

    My OMV (version 5.5.21-1 Usul) is an Armbian Helios 64 NAS on 5.9.14-rockchip64. I have a Raspberry Pi 3B running the latest Pi-hole core version 5.2.3 with working Unbound, pointing IPv4 to 127.0.0.1#5053 only. The static IP of the Pi-hole is 192.168.1.103; the static IP of the Helios 64 OMV is 192.168.1.115. My OMV hostname is GryzNAS; the router is an Asus RT-AX88U running asuswrt-merlin with gateway IP 192.168.1.1.


    The key point here is that the Pi-hole is handling DHCP (router DHCP turned off). The Pi-hole has no issues assigning DHCP to other clients (or with other static IPs assigned around the Pi-hole/DNS server).....except for the OMV.


    Current network settings of the OMV are:


    My issue, I think, is that I need to open several of OMV's ports across the LAN, since the Pi-hole is constantly sending dnsmasq-dhcp requests and I think the OMV cannot answer due to closed ports.


    I ran this nmap scan from the CLI:


    pi@pihole: $ sudo nmap -sU -p67,80,443 --script dhcp-discover 192.168.1.115

    Starting Nmap 7.70 ( https://nmap.org ) at 2021-01-15 16:22 CST

    Nmap scan report for GryzNAS (192.168.1.115)

    Host is up (0.00063s latency).


    PORT STATE SERVICE

    67/udp closed dhcps

    80/udp closed http

    443/udp closed https

    MAC Address: 64:62:66:D0:09:84 (GryzNAS)


    Nmap done: 1 IP address (1 host up) scanned in 2.18 seconds


    The tail of the queries that dnsmasq-dhcp is constantly logging, pulled from pihole.log with this CLI command: $ grep 'dnsmasq-dhcp' /var/log/pihole.log


    Jan 15 13:24:14 dnsmasq-dhcp[1445]: DHCPINFORMATION-REQUEST(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS

    Jan 15 13:24:17 dnsmasq-dhcp[1445]: RTR-ADVERT(eth0) 2601:246:cc00:a00::

    Jan 15 13:24:17 dnsmasq-dhcp[1445]: DHCPSOLICIT(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76

    Jan 15 13:24:17 dnsmasq-dhcp[1445]: DHCPREPLY(eth0) 2601:246:cc00:a00::107 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS

    Jan 15 13:24:17 dnsmasq-dhcp[1445]: RTR-SOLICIT(eth0) 38:18:4c:0a:59:22

    Jan 15 13:24:17 dnsmasq-dhcp[1445]: RTR-ADVERT(eth0) 2601:246:cc00:a00::

    Jan 15 13:24:21 dnsmasq-dhcp[1445]: DHCPINFORMATION-REQUEST(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS

    Jan 15 13:24:28 dnsmasq-dhcp[1445]: DHCPINFORMATION-REQUEST(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS

    Jan 15 13:24:36 dnsmasq-dhcp[1445]: DHCPINFORMATION-REQUEST(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS

    Jan 15 13:24:36 dnsmasq-dhcp[1445]: RTR-SOLICIT(eth0) 04:5d:4b:86:d2:42

    Jan 15 13:24:36 dnsmasq-dhcp[1445]: RTR-ADVERT(eth0) 2601:246:cc00:a00::

    Jan 15 13:24:43 dnsmasq-dhcp[1445]: DHCPINFORMATION-REQUEST(eth0) 00:02:00:00:ab:11:05:4b:fd:af:a4:b8:a1:76 GryzNAS


    These queries run constantly, filling up pihole.log and wearing on the SD card of the RPi 3B Pi-hole.


    How do I set up a new network interface to open the same ports as you show in the Docker setup? I'm obviously not using Docker for this, so I don't think this would be a VLAN? I was looking here for help (downloadable PDF): [How To] OMV4 - Install Pi-Hole in Docker: Update 01/27/20 - Adding Unbound, a Recursive DNS Server
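    One thing I'm also wondering about on the OMV side is simply telling eth0 to ignore router advertisements so it stops soliciting over IPv6 at all — a rough sketch of what I mean, though I haven't confirmed this is the right fix for my setup (the sysctl.d filename is just my own choice):

    # stop eth0 from accepting IPv6 router advertisements (takes effect immediately)
    sudo sysctl -w net.ipv6.conf.eth0.accept_ra=0

    # make it persistent across reboots
    echo 'net.ipv6.conf.eth0.accept_ra = 0' | sudo tee /etc/sysctl.d/99-no-ra.conf
    sudo sysctl --system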


    Thanks......

    I've finished a new build of a Helios 64 NAS running the latest kernel (5.9.14-rockchip64) and OMV 5.5.20-1 (Usul). The OS (a Debian/Armbian build) and OMV are all on an M.2 SATA drive formatted to ext4 as one partition.


    I did a CLI df -h to view the /dev/sda1 drive (the primary OS drive), and I've used about 2.6GB of the 229GB available...so I have plenty of space to partition this drive.


    My question is: SHOULD I partition the drive and put all Docker containers in this new partition as a best practice? I read that I should use GParted Live via USB to repartition the /dev/sda1 drive since I've already got my OS on it. Right?


    I'm somewhat new to Docker so forgive the noob question here.....thanks:)
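    For reference, the alternative I keep seeing mentioned (instead of repartitioning) is pointing Docker's data directory at another filesystem via /etc/docker/daemon.json — I believe the omv-extras GUI has a Docker storage path setting that does the same thing. A rough sketch, with the target path as a placeholder and assuming there is no existing daemon.json to preserve:

    # point Docker's data directory at another filesystem (overwrites any existing daemon.json)
    sudo mkdir -p /srv/dev-disk-by-uuid-DATA1/docker
    echo '{ "data-root": "/srv/dev-disk-by-uuid-DATA1/docker" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker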

    Thanks......good and easier idea to reduce the reserved space on the parity disk. I'll use tune2fs to accomplish this. 8)
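    For my own notes, this is the sort of thing I have in mind — the device name is a placeholder for my parity disk, and I'd confirm it with blkid first:

    # see how much space is currently reserved for root on the parity filesystem
    sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block count'

    # drop the reserved space to 0% so the parity file can use the full filesystem
    sudo tune2fs -m 0 /dev/sdb1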

    Hi all,


    First of all, I am solid with Raspberry Pi, Pi-hole, & Debian Linux. I'm in the process of a new NAS build using the new Helios64 (5.9.11-rockchip64) + the latest OMV (5.5.17-3 Usul).


    I've got the main OMV install working fine, and I'm planning on configuring SnapRAID + the mergerfs union filesystem once all the disks are mounted.


    I have set up (installed & mounted) 2 x 8TB WD Red Plus HDDs (CMR) to be the data disks.

    I'm looking to set up another 8TB WD Red (SMR) as my single parity HDD.


    I read somewhere that if your data disks are the same size as the parity disk, you should partition the data disks "slightly" smaller. Should I do this, or am I being overly cautious? (I'm a mechanical engineer and we're trained to be this way!)
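    If it matters, my rough plan before the first sync is just to compare the actual filesystem sizes and confirm the parity filesystem isn't smaller than either data filesystem — something like this, with the mount paths as placeholders:

    # compare usable filesystem sizes (in bytes) of the parity disk vs. the data disks
    df -B1 --output=size,target /srv/dev-disk-by-uuid-PARITY /srv/dev-disk-by-uuid-DATA1 /srv/dev-disk-by-uuid-DATA2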