Hi,
I sat and configured my OMV for probably an hour yesterday, and it's no longer crashing, so we can ignore that issue!
However, I added another 5 disks, set them up as RAID 5, created a shared folder, and created an NFS share.
The bind mount does not seem to work, though.
root@pgnas:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
└─md126 9:126 0 7T 0 raid5 /srv/dev-disk-by-label-ssdraid5
sdb 8:16 0 1.8T 0 disk
└─md126 9:126 0 7T 0 raid5 /srv/dev-disk-by-label-ssdraid5
sdc 8:32 0 1.8T 0 disk
└─md126 9:126 0 7T 0 raid5 /srv/dev-disk-by-label-ssdraid5
sdd 8:48 0 1.8T 0 disk
└─md126 9:126 0 7T 0 raid5 /srv/dev-disk-by-label-ssdraid5
sde 8:64 0 1.8T 0 disk
└─md126 9:126 0 7T 0 raid5 /srv/dev-disk-by-label-ssdraid5
md126 is the new mdraid array.
root@pgnas:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 26M 3.2G 1% /run
/dev/mapper/pgnas--vg-root 202G 5.9G 186G 4% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
tmpfs 16G 0 16G 0% /tmp
/dev/nvme0n1p2 237M 68M 157M 31% /boot
/dev/nvme0n1p1 511M 132K 511M 1% /boot/efi
/dev/md127 8.2T 7.5T 699G 92% /srv/dev-disk-by-label-pgnas2
tmpfs 3.2G 0 3.2G 0% /run/user/0
/dev/md126 7.0T 53G 7.0T 1% /srv/dev-disk-by-label-ssdraid5
It's mounted at /srv/dev-disk-by-label-ssdraid5 and I can see my files:
root@pgnas:~# ls /srv/dev-disk-by-label-ssdraid5/
utils stuff trash
In /etc/exports:
root@pgnas:~# cat /etc/exports
# This configuration file is auto-generated.
# WARNING: Do not edit this file, your changes will be lost.
#
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/export/pgnas 192.168.0.0/24(fsid=1,rw,subtree_check,insecure)
/export/zfs 192.168.0.0/24(fsid=2,rw,subtree_check,insecure)
# NFSv4 - pseudo filesystem root
/export 192.168.0.0/24(ro,fsid=0,root_squash,no_subtree_check,hide)
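For context, with the NFSv4 pseudo-root (fsid=0 on /export) a client mounts paths relative to /export; the server address below is just a placeholder, not from my setup:

```shell
# Hypothetical client-side mount, run as root on the client.
# With fsid=0 on /export, share paths are relative to it: /export/zfs -> :/zfs
#   mount -t nfs4 <server-ip>:/zfs /mnt/zfs
# After /etc/exports changes on the server:
#   exportfs -ra    # re-read the exports file
#   exportfs -v     # show what is actually being exported
```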
My fstab:
root@pgnas:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/pgnas--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/nvme0n1p2 during installation
UUID=7d08fae3-6dee-4647-b4a9-0687bc46b07b /boot ext2 defaults 0 2
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=E6DF-12B7 /boot/efi vfat umask=0077 0 1
/dev/mapper/pgnas--vg-swap_1 none swap sw 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
/dev/disk/by-label/pgnas2 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/ssdraid5 /srv/dev-disk-by-label-ssdraid5 xfs defaults,nofail,noexec,usrquota,grpquota,discard,inode64 0 2
/srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas none bind,nofail 0 0
/srv/dev-disk-by-label-ssdraid5/zfs /export/zfs none bind,nofail 0 0
# <<< [openmediavault]
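As I understand it, for an fstab bind mount like the two entries above, the source directory has to exist on the mounted filesystem before mount runs; with nofail, a missing source just leaves the target as an empty directory. A quick check (a small hypothetical helper, paths taken from my fstab):

```shell
#!/bin/sh
# check_src: warn if a bind-mount source directory is missing.
# (Hypothetical helper; the two paths are the bind sources from my fstab.)
check_src() {
    if [ -d "$1" ]; then
        echo "ok: $1"
    else
        echo "MISSING: $1 (mount --bind would have nothing to bind)"
    fi
}

check_src /srv/dev-disk-by-label-pgnas2/pgnas
check_src /srv/dev-disk-by-label-ssdraid5/zfs
```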
I can see that it should bind mount /srv/dev-disk-by-label-ssdraid5/zfs to /export/zfs,
but if I do an ls /export/zfs I cannot see any files.
I have manually unmounted and remounted /export/zfs, but it's still blank.
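For reference, the remount was roughly this (as root, commands from memory), plus a findmnt check of what is actually mounted on the target:

```shell
# Roughly what I ran, as root (from memory):
#   umount /export/zfs
#   mount /export/zfs    # re-reads the fstab bind entry
# Check what is actually mounted on the target; findmnt prints nothing
# and exits non-zero if no filesystem is mounted there:
findmnt --noheadings /export/zfs || echo "nothing mounted on /export/zfs"
```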
Can anyone help with this?
Br
Patric