Well, a few days later now and everything appears to be working well so far. I did notice something this morning, though, after a "sync" last night: it doesn't seem to be distributing the data to the smaller drives. I thought it looked for whichever drive had the most free space (MFS) and put the data there, but it only appears to be filling two of the larger 4 TB drives (the main one and the parity drive). I'm not sure if I set something up wrong or not.
Help me install OMV with Intel e1000e NIC
-
-
What is the output of:
cat /etc/fstab
-
root@omvserver:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc1 during installation
UUID=0f234674-87a9-48ac-948c-03ecdc0c0b47 / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1
# swap was on /dev/sdc5 during installation
UUID=adca230b-32ec-475a-bf54-4221040f474c none swap sw 0 0
/dev/sdf1 /media/usb0 auto rw,user,noauto 0 0
/dev/sdf2 /media/usb1 auto rw,user,noauto 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
UUID=589c0de1-b7c2-4f37-ab3e-8385322858cb /media/589c0de1-b7c2-4f37-ab3e-8385322858cb ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
UUID=1d4b27f7-183c-41e5-b4ec-ae3592202954 /media/1d4b27f7-183c-41e5-b4ec-ae3592202954 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
UUID=4491dda9-9825-4aa8-9f37-a6cf2b5ea216 /media/4491dda9-9825-4aa8-9f37-a6cf2b5ea216 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
UUID=85555d0b-eccf-4279-a92b-97a983b58279 /media/85555d0b-eccf-4279-a92b-97a983b58279 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
none /media/Storage aufs br:/media/1d4b27f7-183c-41e5-b4ec-ae3592202954//=rw:/media/85555d0b-eccf-4279-a92b-97a983b58279//=rw:/media/4491dda9-9825-4aa8-9f37-a6cf2b5ea216//=rw,sum,create=mfs,udba=reval 0 0
/media/Storage /media/4491dda9-9825-4aa8-9f37-a6cf2b5ea216/pool/ none bind 0 0
# <<< [openmediavault]
root@omvserver:~#
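For reference, the `create=mfs` option in the aufs line above is what tells aufs to create new files on the branch with the most free space. A rough shell sketch of that selection logic (the function name is made up for illustration; it is not part of aufs):

```shell
# Sketch of what aufs's create=mfs policy does: among the branch mount
# points, pick the one with the most available bytes.
most_free_branch() {
    for d in "$@"; do
        # df -B1 --output=avail prints a header line, then the byte count
        printf '%s %s\n' "$(df -B1 --output=avail "$d" | tail -n1 | tr -d ' ')" "$d"
    done | sort -rn | awk 'NR==1 {print $2}'
}

# Demo on directories that exist everywhere; in this thread's setup you
# would pass the three /media/<uuid> branch mount points instead.
most_free_branch / /tmp
```

Running that against the actual branch mount points and comparing it with where files land is one way to check whether mfs is being honored.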
-
The setup is right. I'm not sure why aufs is not spreading data across all of the drives. By default, it should check approximately every 30 seconds to see which drive has the most free space. What is the output of:
mount
-
root@omvserver:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=983656,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=788652k,mode=755)
/dev/disk/by-uuid/0f234674-87a9-48ac-948c-03ecdc0c0b47 on / type ext4 (rw,noatime,nodiratime,discard,errors=remount-ro,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1840660k)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/sda1 on /media/589c0de1-b7c2-4f37-ab3e-8385322858cb type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group,_netdev)
/dev/sdb1 on /media/1d4b27f7-183c-41e5-b4ec-ae3592202954 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group,_netdev)
/dev/sdd1 on /media/4491dda9-9825-4aa8-9f37-a6cf2b5ea216 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group,_netdev)
/dev/sde1 on /media/85555d0b-eccf-4279-a92b-97a983b58279 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group,_netdev)
none on /media/Storage type aufs (rw,relatime,si=1527f583f0027146,create=mfs,sum)
none on /media/4491dda9-9825-4aa8-9f37-a6cf2b5ea216/pool type aufs (rw,relatime,si=1527f583f0027146,create=mfs,sum)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
root@omvserver:~#
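One more thing worth checking: the `si=1527f583f0027146` value in the mount output is the aufs superblock ID, and aufs publishes each mount's branch list under `/sys/fs/aufs`. That's a quick way to confirm all three branches are actually attached read-write. A sketch (guarded, since the path only exists where aufs is mounted):

```shell
# Inspect an aufs mount's branches via sysfs. The si value below is the
# one from this thread's mount output; substitute your own from `mount`.
si=1527f583f0027146
if [ -d "/sys/fs/aufs/si_$si" ]; then
    # prints one "<branch path>=<rw|ro>" line per branch
    cat "/sys/fs/aufs/si_$si"/br[0-9]*
else
    echo "no aufs superblock si_$si found (is aufs mounted?)"
fi
```

If any branch shows `=ro` there, new files can't be created on it regardless of the mfs policy.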
That looks ok too. Maybe reboot? Not sure what is going on.
-
-
Do I need to enable Pool in aufs? Right now it looks like this: [screenshot]
And when I go into SnapRAID and click Pool I get this: [screenshot]
-
About the e1000e problem you were seeing with the built-in networking on the asrock:
I have an ASRock H97M Pro4 mobo, and I determined that the PCI ID returned by the card is not 1559, it's 15A1. It turns out some distributions have a new enough driver and some don't. In fact, there is a Linux driver and a FreeBSD driver on Intel's website, and the FreeBSD driver doesn't have the other IDs. I was able to patch the source and get FreeBSD to work. If you use the command lspci, the last 4 digits at the end of the line are the ID. If you still have that board, I'm curious what its ID was.
On my new board ASRock Z97E-ITX/ac it's "Network controller: Broadcom Corporation Device 43b1 (rev 03)"
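For what it's worth, the device ID can be pulled out of `lspci -nn` output directly. A small sketch using the 15A1 ID mentioned above (the sample line is illustrative; your actual lspci output will differ):

```shell
# Extract the PCI device ID (the part after Intel's vendor ID 8086) from
# an `lspci -nn` style line. The sample line below is illustrative.
line='00:19.0 Ethernet controller [0200]: Intel Corporation Device [8086:15a1]'
echo "$line" | sed -n 's/.*\[8086:\([0-9a-f]\{4\}\)\].*/\1/p'
# prints: 15a1
```

On a live system you'd feed it something like `lspci -nn | grep -i ethernet` instead of the hard-coded line.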
-
-
Anyone have any ideas about my HDD issues?
-
Quote from "deadnbrkn84"
Anyone have any ideas about my HDD issues?
About using the 'pool' command? You need Enable Pool set on the Settings tab.
I cloned my SSD to my new server. I have a http://www.asrock.com/mb/Intel/Z87E-ITX/ board and tried the steps in the tutorial to install the new network card, but when I type
insmod e1000e.ko
the machine says
insmod: error inserting 'e1000e.ko': -1 Invalid module format
What should I do now?
I tried this driver: https://downloadcenter.intel.c…=Y&DwnldID=15817&lang=eng
but when I run "make install" after extracting it, I get this when I run ifconfig -a:
I have only lo, no eth0 or anything.
For now I have put another network card in to have an internet connection. That card works, but I want to use the onboard card.
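"Invalid module format" almost always means the module was built against a different kernel than the one you are running. A hedged sketch of the usual fix on Debian-based OMV (the `src/` path is assumed to be wherever you extracted Intel's tarball, and the package names are the standard Debian ones):

```shell
# "insmod: Invalid module format" usually means a kernel-version mismatch.
uname -r   # the running kernel

# Compare with the kernel the module was built for (path assumed to be
# the src/ directory of Intel's extracted e1000e tarball):
#   modinfo src/e1000e.ko | grep vermagic
#
# If they differ, install matching headers and rebuild:
#   apt-get install build-essential linux-headers-$(uname -r)
#   cd src && make clean && make install
#   depmod -a && modprobe e1000e
```

If `vermagic` matches `uname -r` and it still fails, the driver may simply not support that board's PCI ID, as discussed above.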
-
-
Quote from "ryecoaaron"
About using the 'pool' command? You need Enable Pool set on the Settings tab.

So when I enable Pool, it creates a pool under whatever name I give it, but then all my shares are inaccessible. Is there a way to create a pool and copy all my data from the other shares to it? I don't know how to get my data into a pool, since I can't access my shares once the pool is created.
-
Any ideas on this? It's still saving everything to the one 4 TB drive even though Pool is now enabled.
-
The SnapRAID "pooling" feature is only for reading. Now that I think about it, why would you need the pooling feature when you are using aufs?
-
-
I just wanted it to balance the data between all the drives, and right now it's dumping everything on the one 4 TB drive. I didn't know the pooling was only for reading. I thought that if MFS was checked it would automatically put the data on the drive with the most free space, which isn't happening right now. Am I missing something?
-
aufs is r/w pooling. I don't know why it is only writing to one drive. Even if mfs is unchecked, it should still write to both drives unless everything you are writing goes into the same directory. Maybe changing the configuration is not working right on aufs? I assume you have been writing to the pool and not to an individual branch?
-
I've been writing to whatever shared folder it is that I created. Here is how I have it set up: [screenshot]
-
-
And the branches: [screenshot]
-
Just set up a test VM; everything works OK on it. Does the user writing to the pool have r/w privileges (not ACL) on the pool shared folder and all three branch shared folders?
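A quick way to check that is to attempt a write as that user in the pool and in each branch. A small probe sketch (the function name is made up for illustration; run it as the share user, or via `sudo -u <user>`, against your own paths):

```shell
# Probe write permission by creating and removing a marker file in each
# directory passed in.
probe_write() {
    for d in "$@"; do
        if touch "$d/.aufs_write_test" 2>/dev/null; then
            rm -f "$d/.aufs_write_test"
            echo "writable: $d"
        else
            echo "NOT writable: $d"
        fi
    done
}

# Demo on /tmp; in this thread you would pass /media/Storage plus the
# three /media/<uuid> branch mount points.
probe_write /tmp
```

If the pool reports writable but a branch does not, aufs will quietly skip that branch when creating files, which would explain everything landing on one drive.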