I can relate to your post. While planning and building my server, I found myself balancing my desire to build a race car against the reality that I need a tractor. My solution has been to make a list of my needs and work out a strategy for each.
I ended up building two servers. One manages my most important data: photos, videos, and personal documentation. The data on that server sits on a ZFS RAIDZ3 volume (reliable, like a Volvo) that is backed up to multiple external devices. One of those devices is my media server, which uses SnapRAID (the tractor). Very little data on my media server is backed up, yet it is the target for many of my devices' backups. One of the services I have installed is Plex, which uses a large database that appreciates fast access, so I keep all of my Docker data on a ZFS RAID10 volume (the race car).
I realize this doesn't answer your question directly, but I hope it satisfies some of your curiosity.
Here are a couple links to resources I found helpful:
RAID10 via ZFS Plugin
https://docs.oracle.com/cd/E26505_01/html/E37384/toc.html
https://www.45drives.com/community/articles/RAID-and-RAIDZ/
Posts by ajaja
-
-
I came up with a solution and thought I would post it here for those who are interested. I do not know enough about ZFS to understand what's going on in the plugin's interface, so I found a command-line solution.
Code
sudo zpool create -m /srv/Wharf Zenith \
  mirror ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX33D ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX829Y \
  mirror ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX24V ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX75A
The important detail that had tripped me up in past attempts was setting the mount point: -m /srv/Wharf. This flag allows OMV to display my volume natively; see the attached images.
Code
user@omv:~# zpool status
  pool: Zenith
 state: ONLINE

        NAME                                           STATE     READ WRITE CKSUM
        Zenith                                         ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX33D  ONLINE      0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX29Y  ONLINE      0     0     0
          mirror-1                                     ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX24V  ONLINE      0     2     0
            ata-Samsung_SSD_870_EVO_1TB_S62xXxXxXxX75A  ONLINE      0     0     0

errors: No known data errors
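If the mount point ever needs to be checked or moved after the pool exists, ZFS can do that directly. A minimal sketch, using my pool name Zenith as the example:
Code
# Show where the pool is currently mounted
zfs get mountpoint Zenith
# Change the mount point after creation; the pool should remount at the new path
sudo zfs set mountpoint=/srv/Wharf Zenith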
-
I am starting a fresh install on my media server, which is heavily dependent on Docker. Would it make sense to apply this fix now, prior to installing Docker?
Here is the solution to disable AppArmor on the system; it worked on my test system (not harmful, since the apparmor package is not installed):
Code
sudo mkdir -p /etc/default/grub.d
echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0"' | sudo tee /etc/default/grub.d/apparmor.cfg
sudo update-grub
sudo reboot
Taken from: https://wiki.debian.org/AppArmor/HowToUse#Disable_AppArmor
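To confirm the change took effect after the reboot, this is roughly how I would check it (aa-status is only available if the apparmor userspace tools happen to be installed):
Code
# The running kernel's command line should now contain apparmor=0
cat /proc/cmdline
# Optional: if the apparmor tools are installed, this should report AppArmor as not loaded
sudo aa-status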
-
Hello,
I have four (4) 1 TB SSD drives that I would like to use as a fast, redundant database drive.
How do I create a RAID10 using the ZFS Plugin?
I understand that RAID10 in ZFS is simply striping mirrored VDevs. Will I need to do this in several steps, or is there an established method?
Command line reference:
sudo zpool create NAME mirror VDEV1 VDEV2 mirror VDEV3 VDEV4
or
sudo zpool create NAME mirror VDEV1 VDEV2
sudo zpool add NAME mirror VDEV3 VDEV4
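For reference, this is how I would expect to verify the resulting layout afterwards (a sketch, assuming the placeholder pool name NAME):
Code
# Both mirrors should appear as separate vdevs under the pool
sudo zpool status NAME
# Usable capacity should be roughly half of the raw total
sudo zpool list -v NAME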
I am doing a fresh install, and creating this RAID will be one of my first operations, as it will hold both my home directories and Docker data.
-
The WebGUI is not logging me in; instead it loops back to a cleared login form.
-
-
Is the one you are missing listed somewhere? We would need the device name or the UUID.
/dev/sdi
-
Post the output of
lsblk
blkid
omv-showkey mntent
Does the drive show up?
Did you see anything useful in the data I uploaded?
-
Post the output of
omv-showkey mntent
Code
root@omv:~# omv-showkey mntent
<mntent> <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid> <fsname>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx|/dev/xxx</fsname> <dir>/xxx/yyy/zzz</dir> <type>none|ext2|ext3|ext4|xfs|jfs|iso9660|udf|...</type> <opts></opts> <freq>0</freq> <passno>0|1|2</passno> <hidden>0|1</hidden> <usagewarnthreshold>xxx</usagewarnthreshold> <comment>xxx</comment> </mntent>
<mntent> <uuid>ce674364-408a-4234-a6f7-eae30740a411</uuid> <fsname>/dev/disk/by-uuid/ffd38b91-901e-47e5-a388-e58632a7b2c9</fsname> <dir>/srv/dev-disk-by-uuid-ffd38b91-901e-47e5-a388-e58632a7b2c9</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>WD10TB-ffd38b91</comment> </mntent>
<mntent> <uuid>e1190858-7226-4ff3-b4a1-8dc10331d04d</uuid> <fsname>/dev/disk/by-uuid/38fad4a7-7d34-4e7d-b5a3-32e0588deead</fsname> <dir>/srv/dev-disk-by-uuid-38fad4a7-7d34-4e7d-b5a3-32e0588deead</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>WD10TB-38fad4a7</comment> </mntent>
<mntent> <uuid>54ed86d7-c208-4065-ba3c-d356a0151e50</uuid> <fsname>b6f58016-f389-4861-9cfd-7c8ad4982640</fsname> <dir>/srv/mergerfs/Store</dir> <type>fuse.mergerfs</type> <opts></opts> <freq>0</freq> <passno>0</passno> <hidden>1</hidden> <usagewarnthreshold>0</usagewarnthreshold> <comment></comment> </mntent>
<mntent> <uuid>4f2cbb66-67a5-4f9f-b87c-b111e96477a0</uuid> <fsname>Zenith</fsname> <dir>/srv/Zenith</dir> <type>zfs</type> <opts>rw,relatime,xattr,noacl</opts> <freq>0</freq> <passno>0</passno> <hidden>1</hidden> <usagewarnthreshold>0</usagewarnthreshold> <comment></comment> </mntent>
<mntent> <uuid>54b0352d-39fd-4df3-8248-e415567383eb</uuid> <fsname>/dev/disk/by-uuid/196879b9-0183-4a6c-a452-53a1b5cbb6c7</fsname> <dir>/srv/dev-disk-by-uuid-196879b9-0183-4a6c-a452-53a1b5cbb6c7</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>TOSH16TB-196879b9</comment> </mntent>
<mntent> <uuid>3a261ea0-4b4d-4a90-8834-88cb984458d0</uuid> <fsname>/dev/disk/by-uuid/68d5293b-b3ee-438b-b297-45b135fc3eb0</fsname> <dir>/srv/dev-disk-by-uuid-68d5293b-b3ee-438b-b297-45b135fc3eb0</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>WD16TB-e5986176</comment> </mntent>
<mntent> <uuid>e8909caf-ca50-4895-a5f2-66bff26128f2</uuid> <fsname>/dev/disk/by-id/ata-ST18000NM000J-2TV103_ZR555BVH-part1</fsname> <dir>/srv/dev-disk-by-id-ata-ST18000NM000J-2TV103_ZR555BVH-part1</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>ST18TB-cbe77959</comment> </mntent>
<mntent> <uuid>16fc75e5-2641-4c63-a90c-31f32eb1e5cd</uuid> <fsname>/dev/disk/by-id/ata-WDC_WD120EMFZ-11A6JA0_9KG52WDL-part1</fsname> <dir>/srv/dev-disk-by-id-ata-WDC_WD120EMFZ-11A6JA0_9KG52WDL-part1</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>WD12TB-c7a7fd91</comment> </mntent>
<mntent> <uuid>a9250879-dec0-41ef-bc5c-c8670c95a279</uuid> <fsname>/dev/disk/by-id/usb-WD_easystore_25FB_32594A3052303344-0:0-part1</fsname> <dir>/srv/dev-disk-by-id-usb-WD_easystore_25FB_32594A3052303344-0-0-part1</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>external10TB</comment> </mntent>
<mntent> <uuid>7f1e6332-a511-47f9-8d6d-dfd410fbfb7c</uuid> <fsname>/dev/disk/by-uuid/5579e13f-0dd4-41dc-8dc7-1d8e9c25a51d</fsname> <dir>/srv/dev-disk-by-uuid-5579e13f-0dd4-41dc-8dc7-1d8e9c25a51d</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>90</usagewarnthreshold> <comment>WD12TB-5579e13f</comment> </mntent>
<mntent> <uuid>c13cd88d-2eab-4c9c-bdd3-620f99cd8785</uuid> <fsname>/dev/disk/by-uuid/304f067c-222e-4818-b8f0-2834b63c1f71</fsname> <dir>/srv/dev-disk-by-uuid-304f067c-222e-4818-b8f0-2834b63c1f71</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>85</usagewarnthreshold> <comment>ST08TB-304f067c</comment> </mntent>
<mntent> <uuid>05939064-bd37-49c5-a5c7-0ea43fc06f88</uuid> <fsname>/dev/disk/by-uuid/f540400f-2cc5-4653-8d3a-971dea3daa76</fsname> <dir>/srv/dev-disk-by-uuid-f540400f-2cc5-4653-8d3a-971dea3daa76</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>85</usagewarnthreshold> <comment>TOSH08TB-f540400f</comment> </mntent>
<mntent> <uuid>160b7a3c-7641-42b5-93c8-e987aaffb010</uuid> <fsname>/dev/disk/by-uuid/4452839b-ceb5-47dc-83d1-b54593f5f8c9</fsname> <dir>/srv/dev-disk-by-uuid-4452839b-ceb5-47dc-83d1-b54593f5f8c9</dir> <type>btrfs</type> <opts>defaults,nofail</opts> <freq>0</freq> <passno>2</passno> <hidden>0</hidden> <usagewarnthreshold>85</usagewarnthreshold> <comment>TOSH08TB-4452839b</comment> </mntent>
<mntent> <uuid>79684322-3eac-11ea-a974-63a080abab18</uuid> <fsname>/dev/disk/by-uuid/72a381c3-77f3-41e3-9f0c-498026421336</fsname> <dir>/</dir> <type>ext4</type> <opts>errors=remount-ro</opts> <freq>0</freq> <passno>1</passno> <hidden>1</hidden> </mntent>
root@omv:~#
-
Post the output of
blkid
Code
root@omv:~# blkid
/dev/sdl1: LABEL="Zenith" UUID="9323196876188285012" UUID_SUB="16174935876269510914" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-7916fe0d87611ebc" PARTUUID="e6cb7e6f-a2de-104b-bc7d-dbb30405bc03"
/dev/sdd1: UUID="68d5293b-b3ee-438b-b297-45b135fc3eb0" UUID_SUB="178684a4-7b70-4929-b0ab-6bc71d1208fb" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="14ccdcab-7646-402c-a5f0-035570a81757"
/dev/sdb1: UUID="30da0e3e-cb5c-43ef-88cd-d580e924930f" UUID_SUB="88949332-41ff-4f23-bda7-57a73393b2fa" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="7d4527fa-8e55-4647-bb9e-21d0c94964fd"
/dev/sde1: LABEL="WD10TBA02YJD8UYD" UUID="ffd38b91-901e-47e5-a388-e58632a7b2c9" UUID_SUB="35e8fe5a-1ac0-4a98-b81a-8831308b1049" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="c1e6273c-c847-4171-afa3-7dc7c968041a"
/dev/sdh1: LABEL="WDC10TB2YHZ3Z6D" UUID="38fad4a7-7d34-4e7d-b5a3-32e0588deead" UUID_SUB="c8365bff-6b44-45ef-ae79-30a8a7bd6207" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="b465b5fe-1baf-408d-a3d5-6e16a5a84697"
/dev/sdj1: UUID="5579e13f-0dd4-41dc-8dc7-1d8e9c25a51d" UUID_SUB="4ed4aed3-47cb-4273-9a01-269be5014d70" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="40f518a9-4030-4a11-808e-a78099ee917b"
/dev/sda1: UUID="d8555ad0-2203-4444-94d1-22888ae5b6b2" UUID_SUB="2745e343-e4f5-46e1-bb78-c5b125b9d96b" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="105426cd-3012-441c-910e-8f7d3cbc9433"
/dev/sdg1: UUID="196879b9-0183-4a6c-a452-53a1b5cbb6c7" UUID_SUB="b20f1875-ea3a-4a78-a3c9-94445993e481" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="94eb33a1-a665-498e-8df4-1768c14e369c"
/dev/sdk1: UUID="304f067c-222e-4818-b8f0-2834b63c1f71" UUID_SUB="26e343da-12f0-4f68-b449-3fb9335eb245" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="4d90427f-d332-40e6-8d89-d6315b824f47"
/dev/sdc1: UUID="f540400f-2cc5-4653-8d3a-971dea3daa76" UUID_SUB="b149fb87-3f7c-4ea4-905e-4ce921bed228" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="beec12e2-0b08-46bc-86c1-c75881bb883f"
/dev/sdf1: UUID="4452839b-ceb5-47dc-83d1-b54593f5f8c9" UUID_SUB="af531403-74cf-4dc1-a01c-2195a2d97088" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="7148ca75-5778-4fa7-930f-ff156e661a07"
/dev/sdm1: LABEL="Zenith" UUID="9323196876188285012" UUID_SUB="17783954537649735424" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-2f36a35be2ea3598" PARTUUID="0e0bbbf7-ecac-3540-a525-47629e8795fb"
/dev/sdn1: LABEL="Zenith" UUID="9323196876188285012" UUID_SUB="12752816909777006034" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-ae698de9fa4eb248" PARTUUID="9644d7db-0aa0-5f4f-9871-f81dccd6263f"
/dev/sdo1: LABEL="Zenith" UUID="9323196876188285012" UUID_SUB="17043407025018494781" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-1dd329b760afbadb" PARTUUID="35fe100f-76d3-6342-8866-a54c9b6e00f1"
/dev/sdp1: UUID="45EF-1513" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="a7955f69-fc4f-4857-9077-24d91c65c93e"
/dev/sdp2: UUID="72a381c3-77f3-41e3-9f0c-498026421336" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="bc4bd863-afce-40bf-8c8a-908e72f5e0e9"
/dev/sdp3: UUID="eb47b224-4dd5-4210-acf0-a912f22a7d67" TYPE="swap" PARTUUID="4ae22b01-e75b-403a-a410-0874bf2c00db"
/dev/sdr1: UUID="d6866ed1-201a-45cb-8837-bbdd5aad1fff" UUID_SUB="ff1b53ae-6bda-4465-ae8d-1d66292e815d" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="21f066ad-a32b-4476-aeab-4dda9f7ebebc"
/dev/sdl9: PARTUUID="aaf39754-5aa6-124b-9376-411cd230bbe1"
/dev/sdm9: PARTUUID="c17a91a3-29a5-de4d-8999-8f2c02c7b946"
/dev/sdn9: PARTUUID="1f5fb934-2af8-0e4c-87ff-2f3fb2dee8e2"
/dev/sdo9: PARTUUID="0f46e5ed-ce0a-e74d-95ed-c9ff7ee4d73a"
root@omv:~#
-
Post the output of
lsblk
Code
root@omv:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0  16.4T  0 disk
└─sda1     8:1    0  16.4T  0 part /srv/dev-disk-by-id-ata-ST18000NM000J-2TV103_ZR
sdb        8:16   0  10.9T  0 disk
└─sdb1     8:17   0  10.9T  0 part /srv/dev-disk-by-id-ata-WDC_WD120EMFZ-11A6JA0_9
sdc        8:32   0   7.3T  0 disk
└─sdc1     8:33   0   7.3T  0 part /srv/dev-disk-by-uuid-f540400f-2cc5-4653-8d3a-9
sdd        8:48   0  14.6T  0 disk
└─sdd1     8:49   0  14.6T  0 part /srv/dev-disk-by-uuid-68d5293b-b3ee-438b-b297-4
sde        8:64   0   9.1T  0 disk
└─sde1     8:65   0   9.1T  0 part /srv/dev-disk-by-uuid-ffd38b91-901e-47e5-a388-e
sdf        8:80   0   7.3T  0 disk
└─sdf1     8:81   0   7.3T  0 part /srv/dev-disk-by-uuid-4452839b-ceb5-47dc-83d1-b
sdg        8:96   0  14.6T  0 disk
└─sdg1     8:97   0  14.6T  0 part /srv/dev-disk-by-uuid-196879b9-0183-4a6c-a452-5
sdh        8:112  0   9.1T  0 disk
└─sdh1     8:113  0   9.1T  0 part /srv/dev-disk-by-uuid-38fad4a7-7d34-4e7d-b5a3-3
sdi        8:128  0   5.5T  0 disk
sdj        8:144  0  10.9T  0 disk
└─sdj1     8:145  0  10.9T  0 part /srv/dev-disk-by-uuid-5579e13f-0dd4-41dc-8dc7-1
sdk        8:160  0   7.3T  0 disk
└─sdk1     8:161  0   7.3T  0 part /srv/dev-disk-by-uuid-304f067c-222e-4818-b8f0-2
sdl        8:176  0 931.5G  0 disk
├─sdl1     8:177  0 931.5G  0 part
└─sdl9     8:185  0     8M  0 part
sdm        8:192  0 931.5G  0 disk
├─sdm1     8:193  0 931.5G  0 part
└─sdm9     8:201  0     8M  0 part
sdn        8:208  0 931.5G  0 disk
├─sdn1     8:209  0 931.5G  0 part
└─sdn9     8:217  0     8M  0 part
sdo        8:224  0 931.5G  0 disk
├─sdo1     8:225  0 931.5G  0 part
└─sdo9     8:233  0     8M  0 part
sdp        8:240  0 119.2G  0 disk
├─sdp1     8:241  0   512M  0 part /boot/efi
├─sdp2     8:242  0 117.8G  0 part /
└─sdp3     8:243  0   977M  0 part [SWAP]
sdr       65:16   0   9.1T  0 disk
└─sdr1    65:17   0   9.1T  0 part
root@omv:~#
-
So nothing shows up when you click on "Select a file system ..."?
Correct.
-
-
The sharerootfs plugin is not installed. It is needed by the directory picker to show directories. I missed adding it as a dependency for the snapraid plugin.
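If it was missed as a dependency, presumably installing it manually (or from the WebGUI plugin list) is enough; a sketch, assuming the usual openmediavault-* package naming:
Code
sudo apt-get update
sudo apt-get install openmediavault-sharerootfs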
Need I take action?
-
It sure seems like it is acting odd and might be causing this problem.
Are you suggesting the boot device is acting odd?
I've been thinking of doing a clean install.
-
128GB SSD via internal USB
What kind of media is OMV installed on?
Model: USB3.0 high speed (scsi)
Disk /dev/sdo: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 127GB 126GB ext4
3 127GB 128GB 1024MB linux-swap(v1) swap
-
OK ryecoaaron, I added two zeros. Still the same issue.
-
root@omv:~# /etc# lsof -u openmediavault | wc -l
-bash: /etc#: No such file or directory
0
root@omv:~# /etc# lsof -u www-data | wc -l
-bash: /etc#: No such file or directory
0
root@omv:~# /etc# lsof -u root | wc -l
-bash: /etc#: No such file or directory
0
root@omv:~# /etc# lsof +D /srv/docker-apps | wc -l
-bash: /etc#: No such file or directory
0
root@omv:~# /etc# lsof +D /srv/docker-root | wc -l
-bash: /etc#: No such file or directory
0
zero
How many open files does your system have:
lsof | wc -l # may take a while to complete
I have 89812 open file handles in total
for omv:
for www-data (web server)
for root:
and for my docker apps (your path may be different)
and for docker itself (your path may be different, e.g. /var/lib/docker)
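Consolidating the checks above into one place, this is roughly what was asked for (the paths are the ones used earlier in this thread; yours may differ):
Code
lsof | wc -l                        # total open files (may take a while)
lsof -u openmediavault | wc -l      # files held by the openmediavault user
lsof -u www-data | wc -l            # files held by the web server
lsof -u root | wc -l                # files held by root
lsof +D /srv/docker-apps | wc -l    # files under the docker apps path
lsof +D /srv/docker-root | wc -l    # files under the docker root (e.g. /var/lib/docker)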
Code
root@omv:/etc# lsof +D /srv/docker-root | wc -l
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
lsof: no pwd entry for UID 1013
792
User 1013 is a user inside a container; I do not need that user on the host.
Look here for more examples:
-
It is unlikely to be an OS setting then. Maybe change rlimit_files in /etc/php/7.4/fpm/php-fpm.conf and restart php-fpm? But I'm really curious why that would be needed.
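A minimal sketch of that change (the path is the one mentioned above; the value 2048 is just an example):
Code
# In /etc/php/7.4/fpm/php-fpm.conf, set:
#   rlimit_files = 2048
sudo nano /etc/php/7.4/fpm/php-fpm.conf
sudo systemctl restart php7.4-fpm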
Doubled it from 1024 to 2048; no go.
-
Unfortunately no difference.
As a temporary check to see if this fixes it, try:
echo -e "* soft nofile unlimited\n* hard nofile unlimited" | sudo tee -a /etc/security/limits.conf
Then reboot
root@omv:~# echo -e "* soft nofile unlimited\n* hard nofile unlimited" | sudo tee -a /etc/security/limits.conf
* soft nofile unlimited
* hard nofile unlimited
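If it helps, after the reboot the effective limits for a fresh shell can be checked with:
Code
ulimit -n     # soft limit on open files
ulimit -Hn    # hard limit on open files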