Check this out https://forum.openmediavault.o…V5-Pi4-This-might-fix-it/
This looks promising. Will give it a go. Thanks @morian
If you are using /sharedfolders/.......... paths in your Docker volume and bind mounts, you are making a mistake. Use the actual location of the folders instead.
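For example, instead of a /sharedfolders path, bind the backing /srv path directly. A sketch (the disk label, folder names, and image here are illustrative, not from any particular setup):

```shell
docker run -d --name plex \
  -v /srv/dev-disk-by-label-media/movies:/data/movies \
  plexinc/pms-docker
```

If the /srv filesystem isn't mounted, the bind source simply doesn't exist, which fails loudly instead of silently writing to the root filesystem.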
That's sort of the point of shared folders, though. Why go digging for the /srv disk ID? It's annoying. Why can't they just fix it so that Docker spins up after the drives and the sharedfolders routine are done?
@wookash Go start a different thread, please.
So this seems to be the issue: Docker starting before the sharedfolders are mounted.
I use Docker and Portainer rather intensely on OMV, managing a bunch of stuff. I structure most of the containers to have configs in one sharedfolder and content in another, and map to those, so really the only thing a container has locally is itself. That way I can cycle them out and back up the configs and such to separate storage. This is helpful for Plex, because it's a hog with its config dir, so that goes on a faster drive, separate from the larger media pool, which is also on its own drive.
Anyway, the issue I'm having (and this has surfaced a few times) is that if the sharedfolders aren't available to the containers in time, maybe on boot, the containers will write to the path /sharedfolders/stuff on the root filesystem. Even if the sharedfolders get mounted right after, the app keeps writing underneath the mount, the disk fills up, the OMV interface becomes inaccessible, and my / volume is full. Then I have to reboot, comment out the two drives that were mapped to these shares, delete the content that was written to the real / path on the local disk, and reboot the instance.
I'm not sure why this has happened in each case, but I'm thinking maybe using sharedfolders for Docker like this isn't the best way. I'm not really sure about an alternative, though. If I can mount a new disk somewhere else in a way that guarantees nothing gets written to the path when the disk isn't available, perhaps that's the right way; off the top of my head, though, I'm not sure that's any different from what the sharedfolders are doing.
How do the sharedfolders map to the device? It seems like some hybrid between mounting and symlinking, but I'm not familiar with how they work.
Maybe I should just write to the full /srv/dev-disk-xyz-0123 path?
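One defensive option, whichever path ends up in the bind mounts: refuse to start containers unless the backing directory is a real mountpoint, so nothing can ever land on the root filesystem. A sketch (the /srv label path in the comment is hypothetical):

```shell
# check_mounted: succeed only if every argument is a real mountpoint.
# Run it before "docker start ..." in a wrapper script, e.g.:
#   check_mounted /srv/dev-disk-by-label-media || exit 1
check_mounted() {
    for p in "$@"; do
        if ! mountpoint -q "$p"; then
            echo "not mounted: $p, refusing to continue" >&2
            return 1
        fi
    done
    return 0
}
```

mountpoint ships with util-linux, so it should be available on a stock Debian/OMV install.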
Even in the time it took to write this, I had to restart some containers after a reboot because some of the underlying filesystems filled up:
Before:
A little later, I think after I restarted the containers:
Is there a way to delay the Docker service from starting until the end of the boot-up cycle?
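One workaround along those lines (a sketch; the mount path is hypothetical, and this assumes the data filesystem is mounted via fstab/systemd) is a systemd drop-in so docker.service only starts after the data mounts are up:

```shell
# /etc/systemd/system/docker.service.d/waitmounts.conf
# (create with: systemctl edit docker.service, then systemctl daemon-reload)
[Unit]
RequiresMountsFor=/srv/dev-disk-by-label-media
```

RequiresMountsFor makes systemd add Requires= and After= dependencies on the mount units covering that path, so Docker won't start if the mount fails.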
This was an issue with my VLAN routing for VMs vs. hosts, so my fault. Thank god.
Sorry baby. I didn't mean to threaten I was going to leave you. I'll take you to dinner.
This might be a deal breaker for me at this stage.
So I did a fresh install of Debian 10 and OMV and have the same issue.
The board is an X10SDV-TLN4F.
Ethernet Connection X552/X557-AT 10GBASE-T
Is this some incompatibility with Debian?
I just installed a fresh new version of OMV next to a fresh Ubuntu 18.xx VM on the same Proxmox host. They both use the same bridge. The fresh OMV install refuses to run at 10Gb, but the Ubuntu one works fine. Why can't OMV run at 10Gbit here? This might be a deal breaker for me at this stage.
Hello,
I have a strange problem.
I installed OMV5 in a KVM guest on top of Proxmox, on a host that had a 1Gb adapter. I have since migrated this VM to a new host which uses a bridge on a 10Gb card. The KVM image, however, still runs at 1Gb. I installed a new KVM guest using Ubuntu on the same host, bridge, etc., and get 10Gb speeds, but this older OMV guest won't adapt. If I run ethtool I obviously don't get anything, as it's a virtual adapter.
Any idea how I can reinstall the driver or get it set right?
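One thing worth checking (an assumption about your setup, since the VM config isn't shown): if the older VM was created with an emulated NIC model like e1000, it behaves as a 1Gb card regardless of the host NIC. Switching the guest NIC to VirtIO in the VM's hardware settings usually fixes this; the virtio drivers are already in the Debian kernel, so nothing needs reinstalling in the guest. As a sketch (VM ID, MAC, and bridge names are illustrative):

```shell
# /etc/pve/qemu-server/<vmid>.conf -- change the net0 model to virtio:
# before: net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# after:  net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# or via the Proxmox CLI:
qm set <vmid> --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```

Note that ethtool may still not report a meaningful link speed on a virtio adapter; an iperf run between guests is a better test.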
Is there any way to mount remote NFS shares on OMV 5?
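One route, if the Remote Mount plugin from OMV-Extras is available for your version (an assumption; check the plugin list), is to use that. Otherwise a plain fstab entry works; a sketch with a hypothetical server address and paths:

```shell
# /etc/fstab -- mount a remote NFS export on demand:
192.168.1.10:/export/media  /srv/remote-media  nfs  defaults,_netdev,noauto,x-systemd.automount  0  0
```

The _netdev and x-systemd.automount options keep boot from hanging if the NFS server is unreachable.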
So I was able to get the array back, but with only 5 drives. mdadm says it's clean, but that isn't true, as there should be 6 drives in the array. This was a major blowup and almost catastrophic, as I was moving drives from the NAS, where my backups were, to the new OMV arrays. Oof.
I'll save the data to another store and not try to add a new array with new disks on this setup until after it's offloaded to the NAS.
Hello,
I had a RAID 6 array with 6 drives sd[bcdefg]
I added 3 drives and tried to create a new stripe to temporarily move the data to. When I tried to create a shared folder on the new striped array, I got a huge error in the UI and everything took a crap. When I rebooted, all RAID volumes were gone, including the original good volume.
The filesystems are "missing"
For some reason md127 says it's now a RAID0 array.
Examining the disks, they are all RAID6, except sdg, which is RAID5 now? I have no idea why...
They say there are 5 RAID devices, when there should be 6.
sdg seems to be royally borked, and for some reason it was used when I tried to create the new stripe.
root@omv:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 519165ce:33fd48e1:1b948ddf:db836d5a
Name : omv:volume1
Creation Time : Sat Aug 3 11:00:10 2019
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : a2f6bbb2:9ee0f2ac:cd2d287d:ac583392
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Oct 18 16:17:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 856b67ab - correct
Events : 47453
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# mdadm --examine /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 519165ce:33fd48e1:1b948ddf:db836d5a
Name : omv:volume1
Creation Time : Sat Aug 3 11:00:10 2019
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 05301c7f:e0707c69:8d798f65:a1a62730
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Oct 18 16:17:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 94b0cb05 - correct
Events : 47453
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# mdadm --examine /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 519165ce:33fd48e1:1b948ddf:db836d5a
Name : omv:volume1
Creation Time : Sat Aug 3 11:00:10 2019
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : b14f7e66:ba66e835:d8082671:c8e7f874
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Oct 18 16:17:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 98e6b0f9 - correct
Events : 47453
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 519165ce:33fd48e1:1b948ddf:db836d5a
Name : omv:volume1
Creation Time : Sat Aug 3 11:00:10 2019
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 8cc20a5d:ea1725c0:c7f24137:4805728a
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Oct 18 16:17:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : f544dc74 - correct
Events : 47453
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 519165ce:33fd48e1:1b948ddf:db836d5a
Name : omv:volume1
Creation Time : Sat Aug 3 11:00:10 2019
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 9dc32668:bfb9214e:e8a5b48e:680ff584
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Oct 18 16:17:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e0533c9c - correct
Events : 47453
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# mdadm --examine /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : beceed91:d117821d:129ca027:896c33f8
Name : omv:volume1
Creation Time : Wed Aug 1 15:04:05 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 19532614656 (9313.88 GiB 10000.70 GB)
Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
Used Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258984 sectors, after=3072 sectors
State : clean
Device UUID : 82292aed:dec222ca:ea7f0498:f5101e92
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Jul 31 15:13:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7f99f59f - correct
Events : 114145
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@omv:~# cat /proc/mdstat
md127 : inactive sdg[3](S)
9766307328 blocks super 1.2
root@omv:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Working Devices : 1
Name : omv:volume1
UUID : beceed91:d117821d:129ca027:896c33f8
Events : 114145
Number Major Minor RaidDevice
- 8 96 - /dev/sdg
Can I safely rebuild this array? I wasn't even touching the original array. I'm a little concerned that /dev/md127 says RAID0 now too. I assume that to rebuild it right this should be RAID6 with 6 total devices, but I don't mind creating md0 as RAID6 with 5 devices in a degraded state if it means getting the array back temporarily.
maybe:
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
then add /dev/sdg after?
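Before assembling, it's worth confirming that the five healthy members agree with each other, and starting the array read-only first so nothing gets written while you look around. A sketch (device names as in the output above; double-check them after a reboot, since they can shift):

```shell
# All five should report the same Events count and Array UUID:
mdadm --examine /dev/sd[bcdef] | grep -E 'Events|Array UUID|Device Role'
# Assemble read-only; --run starts it even with members missing:
mdadm --assemble --readonly --run /dev/md0 /dev/sd[bcdef]
cat /proc/mdstat
```

Only re-add sdg (or a replacement) once the assembled array mounts and the data checks out; its superblock clearly belongs to a different, older array.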
I unbanned that account quite a while ago.
Thank you very much for that and for your input on the Proxmox thread!
I don't know. Do you? I don't use OMV or portainer for my docker stuff.
Ha,
I don't, really, but I want to keep using OMV because I love it so much. Part of me is thinking, though, that if I use Proxmox and Portainer with Swarm, I wouldn't need OMV anymore?
Ok thank you for the input!
I think what I'll do is put proxmox on all the nodes and:
1. Install an OMV VM
2. Install portainer
3. Create VMs on the other hosts with Ubuntu 18 LTS that take all the resources
4. Run Swarm on all the VMs, with one manager node on one of them
5. Setup portainer and remote agent on those VMs
6. Have main array local to the omv VM host
7. Profit?
Hmmm. Do I really need OMV for this? What does OMV provide if I just go with Portainer, managing nodes with Proxmox and containers with Portainer?
But my containers are on a separate VM that is only containers.
Hi,
Can you elaborate on what you mean here? Running containers in a VM somewhat blows my mind sometimes.
Also, what's the preferred way to deal with shared storage? Should I just have the array as a data volume that I mount over NFS on all the nodes, or do I break apart the array and mount the disks locally on each node?
My main need is really the media volume that Plex uses. How can I share it in a way that a second Plex container or a failover container can use the volume even when it's attached to a different node than where the container resides?
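One common pattern for this in Swarm (a sketch; the server address and export path are hypothetical) is a named volume backed by NFS, so whichever node the container lands on mounts the same export:

```yaml
# docker-compose / stack file fragment:
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,ro
      device: ":/export/media"
```

Each node mounts the export on demand when a container using the volume starts there, so a failover Plex on another node sees the same data without any manual remounting.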
Seed
I'm doing a reorg of all my servers, cleaning some up, upgrading others, and all that stuff, and I'm looking for the best way to pool all of these resources for a refreshed lab. This is what I have:
ASRock Rack AMD 12C/24T - 32GB RAM - 120GB SSD
ASRock Rack Xeon v3 12C/24T - 64GB - 120GB SSD
HP MicroServer Gen8 2C - 10GB - 120GB SSD
HP MicroServer Gen8 4C/8T - 16GB - 120GB SSD
ASRock Rack Avoton 8C - 32GB - 120GB SSD (will probably retire)
ASRock i5 3600 6C - 16GB - 120GB SSD (current OMV 5 with external storage attached), 512GB NVMe drive (cache drive)
Large disk array over an LSI SAS9200-16E - Container storage and data storage
A 2nd large array box with no disks. I may split the disks between both boxes and RAID 10 them so I have better failover there. Currently they're all RAID 6 in one array box.
I am looking for interesting ways to leverage all of these as a pool. I was originally thinking I'd use Portainer and just have OMV on a host natively, running my core VMs plus a Portainer VM that can maybe manage the other nodes' container services. Now I'm thinking maybe I'll run Proxmox natively and OMV inside it as a VM. I want to be sure I don't suffer issues with my Plex container, and perhaps I can still use passthrough to the disk so I don't add much, if any, overhead?
Is it wise to run Proxmox across the board, then set up OMV as a VM, then set up my containers in the OMV VM? Will I have better resiliency this way? One area I'm concerned about: if my OMV node dies today, I'll be left scrounging.
Any ideas welcome as I refresh everything.
I'm not sure what you mean by a version, but I highly recommend using Portainer and running a Plex container from plexinc or linuxserver. If you haven't done this before it may seem daunting, but it's really the only way to go. You'll be happy once you do it.
I just wanted to share that a migration I did from OMV 4 to OMV 5 went very well. I have an external array attached to a Xeon PC. I had an i5 CPU around, so I decided to build a mini-ITX setup for testing.
I installed OMV 5 from USB, launched the GUI, then rebooted with the array attached this time. Remounted the volume and remapped the sharedfolders.
I then added OMV-Extras and set up Portainer, as I use this setup primarily to host all my media and run a few containers. Fired up updated versions of the containers, bound the folders to the new container setup, and all is well.
There were a couple of issues, like the Docker service and boot, as well as a drive issue with sda/sdb and such that I'm familiar with, but overall this migration went very smoothly and the new setup is running really well. Portainer is very slick and a big step up, IMO.
Just wanted to share a success story going from 4 to 5 so others can feel more confident.
I've used Synology, XPEnology, and FreeNAS before, but I love OMV. The only way to go.
Thank you to those that build and maintain OMV.
I'm trying to fire up a vanilla ubuntu:latest (tried both xenial and bionic) and it doesn't start. There's no log either; it just says failed. Any ideas how to troubleshoot this?
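One likely cause (an assumption, since the run command isn't shown): a bare ubuntu container's default command is bash, which exits immediately when no TTY is attached, so the container appears to "fail" with nothing in the logs. Allocating a TTY keeps it alive, which separates image problems from configuration problems:

```shell
# -it allocates a TTY so the default bash process doesn't exit at once;
# -d keeps it in the background. Container name is illustrative.
docker run -d -it --name test-ubuntu ubuntu:bionic
docker ps -a --filter name=test-ubuntu   # STATUS should show "Up"
docker logs test-ubuntu                  # empty output is normal here
```

If it exits even with a TTY, `docker inspect test-ubuntu` shows the exit code and error, which is more useful than the UI's "failed".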
Sure.
You'll also have to open the ports from the host interface to the Docker container interface.
Also, from my experience you have to run in privileged mode so OpenVPN can use the /dev/net/tun device.
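As a sketch of the flags involved (image name and port are illustrative; granting NET_ADMIN plus the TUN device is a narrower alternative to full --privileged, which also works):

```shell
docker run -d --name openvpn \
  -p 1194:1194/udp \
  --cap-add=NET_ADMIN \
  --device /dev/net/tun \
  kylemanna/openvpn
```

The --device flag passes the host's TUN device into the container, and NET_ADMIN lets OpenVPN configure it, without handing the container every other privilege.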