OK, can report back: it worked. New mainboard screwed in and working. All drives present.
I had to edit a few things, including fstab, while adding the EFI partition. But I think OMV will not overwrite fstab changes, if I remember correctly.
Everything seems to run fine except my pihole container. Anyone got a clue why it would not start up after a hardware migration?
Here is the error message:
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; docker compose --file '/srv/dev-disk-by-label-SSD_Data/appdata/YAML_Storage/pihole/pihole.yml' --env-file '/srv/dev-disk-by-label-SSD_Data/appdata/YAML_Storage/pihole/pihole.env' --env-file '/srv/dev-disk-by-label-SSD_Data/appdata/YAML_Storage/global.env' up -d 2>&1': Container pihole Starting Error response from daemon: failed to create endpoint pihole on network pihole2: network id "b84bd51806218f7bc0c6395c6b4942ed7a239f3b36cd187329be4810653f959a" not found
After doing a bit of reading, this seems to be a common issue on Apple Mac systems, where macvlan networking is not supported.
Now I suppose it might be a similar issue on the kind of hardware I am using.
Check this thread on Github: https://github.com/moby/libnetwork/issues/2614
Anyone got thoughts?
chente: are you running pihole on your friend's OMV using this mainboard?
Might it also be due to the MAC address change on eth0?
chente: are you running pihole on your friend's OMV using this mainboard?
I don't run pihole on that server.
I assume you reconfigured the network interface with omv-firstaid, right? That is necessary when you change hardware. The change may have renamed the network interface and affected your pihole2 docker network. I would try reconfiguring that docker network.
Actually, I had already deleted the docker network and recreated it. That did not help.
In OMV the network interface still appears as eth0, and the network itself is working fine.
BUT I just noticed that the interface indeed has a different name in "ifconfig -a".
Would renaming it in the docker config be enough, or do I really have to run omv-firstaid?
The current name is "enp2s0".
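(For anyone following along: the kernel's current interface names can be listed directly, independent of what the OMV GUI still shows.)

```shell
# Raw interface names straight from sysfs -- this is what the kernel
# currently calls each NIC (predictable names like enp2s0 vs legacy eth0):
ls /sys/class/net
# (equivalently, "ip -br link show" gives a one-line-per-interface summary)
```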
This post from votdev suggests running "omv-salt stage run all".
OK, I changed the network interface in my docker YAML to "enp2s0" and now pihole is running fine.
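For reference, the interface change in a pihole compose file looks roughly like this. This is a sketch, not the actual pihole.yml from this setup: the subnet and gateway values are placeholders, and only the `parent: enp2s0` line reflects the fix described above.

```yaml
networks:
  pihole2:
    driver: macvlan
    driver_opts:
      parent: enp2s0             # was eth0 before the mainboard swap
    ipam:
      config:
        - subnet: 192.168.1.0/24 # placeholder, adjust to your LAN
          gateway: 192.168.1.1   # placeholder
```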
I also ran "omv-salt stage run all" as suggested in the above thread.
No visible change.
I also ran omv-firstaid, and now the network interface shows correctly in the OMV GUI.
I thought that running "omv-salt deploy run fstab" might solve it, but it did not. The drive is still shown twice.
Hmm, OK, it seems like something is confused in the drive mapping in OMV.
config.xml:
</filesystem>
<!-- old systemdrive -->
<!-- <hdparm>
<uuid>1b6fdefc-0fc9-434a-876d-96e2cd822389</uuid>
<devicefile>/dev/disk/by-id/ata-Samsung_SSD_750_EVO_250GB_S33SNWAH887576R</devicefile>
<apm>127</apm>
<aam>0</aam>
<spindowntime>242</spindowntime>
<writecache>0</writecache>
</hdparm>-->
<hdparm>
<uuid>01829846-3936-48dc-a774-0e0e46ebb14f</uuid>
<devicefile>/dev/disk/by-id/ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M7ENVNFE</devicefile>
<apm>0</apm>
<aam>254</aam>
<spindowntime>0</spindowntime>
<writecache>0</writecache>
</hdparm>
<hdparm>
<uuid>3f00f26c-9635-48f6-a095-d834cebafc29</uuid>
<devicefile>/dev/disk/by-id/ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M5LP66Z3</devicefile>
<apm>1</apm>
<aam>254</aam>
<spindowntime>120</spindowntime>
<writecache>1</writecache>
</hdparm>
<hdparm>
<uuid>4ad667c4-d9fb-4c59-82e1-e20087547a8f</uuid>
<devicefile>/dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1270FA05Z5HD</devicefile>
<apm>0</apm>
<aam>0</aam>
<spindowntime>0</spindowntime>
<writecache>0</writecache>
</hdparm>
<hdparm>
<uuid>08bb3a81-ed7f-4d29-a79e-1bea3bdfd459</uuid>
<devicefile>/dev/disk/by-id/ata-ST4000VN006-3CW104_WW6157Q4</devicefile>
<apm>0</apm>
<aam>254</aam>
<spindowntime>0</spindowntime>
<writecache>0</writecache>
</hdparm>
<hdparm>
<uuid>a70486bd-2eb2-48f0-9509-e0466faf6eef</uuid>
<devicefile>/dev/disk/by-id/ata-ST4000VN006-3CW104_WW613NRV</devicefile>
<apm>0</apm>
<aam>254</aam>
<spindowntime>0</spindowntime>
<writecache>0</writecache>
</hdparm>
</storage>
<fstab>
<mntent>
<uuid>6e515f60-1f5f-4ea9-a269-451ad7ed4c28</uuid>
<fsname>/dev/disk/by-label/SSD_Data</fsname>
<dir>/srv/dev-disk-by-label-SSD_Data</dir>
<type>ext4</type>
<opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,discard,acl</opts>
<freq>0</freq>
<passno>2</passno>
<hidden>0</hidden>
<comment></comment>
<usagewarnthreshold>85</usagewarnthreshold>
</mntent>
<mntent>
<uuid>1317b87d-6816-4266-a058-3ff2436fdf51</uuid>
<fsname>/dev/disk/by-label/DataHitachi</fsname>
<dir>/srv/dev-disk-by-label-DataHitachi</dir>
<type>ext4</type>
<opts>defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
<freq>0</freq>
<passno>2</passno>
<hidden>0</hidden>
<usagewarnthreshold>95</usagewarnthreshold>
<comment></comment>
</mntent>
<mntent>
<uuid>7a5381e3-ad3f-44a8-b975-1440e3391860</uuid>
<fsname>/dev/disk/by-label/DataWD</fsname>
<dir>/srv/dev-disk-by-label-DataWD</dir>
<type>ext4</type>
<opts>defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
<freq>0</freq>
<passno>2</passno>
<hidden>0</hidden>
<usagewarnthreshold>95</usagewarnthreshold>
<comment></comment>
</mntent>
<mntent>
<uuid>f96438e5-2824-4981-8e5b-fe4df8fa9678</uuid>
<fsname>9379aef8-4c1e-43c1-ab84-31fd2aa5b875</fsname>
<dir>/srv/9379aef8-4c1e-43c1-ab84-31fd2aa5b875</dir>
<!-- <fsname>/dev/disk/by-label/DataRaid</fsname>
<dir>/srv/dev-disk-by-label-DataRaid</dir> -->
<type>fuse.mergerfs</type>
<opts></opts>
<freq>0</freq>
<passno>0</passno>
<hidden>1</hidden>
<comment></comment>
<usagewarnthreshold>85</usagewarnthreshold>
</mntent>
<mntent>
<uuid>61cfa259-0fb1-48a6-8a1c-f14f197bd147</uuid>
<fsname>/dev/disk/by-label/ParityWD</fsname>
<dir>/srv/dev-disk-by-label-ParityWD</dir>
<type>ext4</type>
<opts>defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
<freq>0</freq>
<passno>2</passno>
<hidden>0</hidden>
<usagewarnthreshold>95</usagewarnthreshold>
<comment></comment>
</mntent>
<mntent>
<uuid>79684322-3eac-11ea-a974-63a080abab18</uuid>
<fsname>/dev/sda1</fsname>
<dir>/</dir>
<type>ext4</type>
<opts>errors=remount-ro,discard,noatime</opts>
<freq>0</freq>
<passno>1</passno>
<hidden>1</hidden>
</mntent>
</fstab>
<shares>
blkid:
/dev/sdc1: LABEL="DataWD" UUID="c0a24e86-363b-412d-ade9-c1c01a2ab6de" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="646f2de2-2633-41ec-86f2-41fc93142f26"
/dev/sdb1: LABEL="DataHitachi" UUID="a72f1874-5021-4f68-aa17-710a7c414e27" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="4f027152-069d-413b-a000-92498dfc7c4a"
/dev/sda1: LABEL="ParityWD" UUID="fdcaae30-3efb-47a7-bab3-a0b78987c8d9" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a918b332-de56-4c45-8834-c1268ee4753d"
/dev/sdd1: UUID="a72f30ef-09bd-4b1f-9f55-1122f120cab6" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="112020c8-8a26-4bb9-a781-883a9cfea18c"
/dev/sdd3: LABEL="SSD_Data" UUID="6c9a3447-9243-47ef-84f8-69205fe4a1a2" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="8b5e162a-f695-49f1-8a50-b3df87d7c117"
/dev/sdd4: SEC_TYPE="msdos" UUID="0D28-D1AD" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI-system" PARTUUID="7ac9cf10-0660-4f56-afdc-8f0f45dee223"
/dev/sdd5: UUID="cd074168-ed5e-4307-9c61-c7554ebb5a0a" TYPE="swap" PARTLABEL="Linux swap" PARTUUID="0f556396-a277-4dac-99ae-be9f700d30f0"
/dev/sdd2: PARTLABEL="BIOS boot partition" PARTUUID="dfe26bcf-a05d-4ad0-91eb-1754f3fae17e"
sda1 is mapped to a UUID that is not even shown in blkid, as far as I can see.
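One way to spot such stale entries is to compare the device paths referenced in OMV's database with what blkid actually reports. A rough sketch (the config path is the standard OMV location; `-P` needs GNU grep):

```shell
# Filesystem UUIDs the kernel can actually see right now:
blkid -s UUID -o value 2>/dev/null | sort
# Device paths referenced by OMV's database:
grep -oP '(?<=<fsname>)[^<]+' /etc/openmediavault/config.xml 2>/dev/null || true
# Any <fsname> with no matching device or UUID in the blkid output is a
# candidate leftover, like the /dev/sda1 entry above.
```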
What would be the best way to fix this?
Remove this entry, as it seems to be an old leftover?
<mntent>
<uuid>79684322-3eac-11ea-a974-63a080abab18</uuid>
<fsname>/dev/sda1</fsname>
<dir>/</dir>
<type>ext4</type>
<opts>errors=remount-ro,discard,noatime</opts>
<freq>0</freq>
<passno>1</passno>
<hidden>1</hidden>
</mntent>
EDIT: found out this was wrong. The UUID 79... seems to be a standard OMV placeholder for the system drive.
I remapped the fsname to sdd1, as that is the current device location, and now the drives show up correctly:
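A sketch of the remapped entry. Since /dev/sdX letters can shift between boots, pointing `fsname` at the stable by-uuid path would be more robust than /dev/sdd1; the UUID below is the one blkid reported for sdd1 above (this variant is an assumption, not what OMV writes itself):

```xml
<mntent>
  <uuid>79684322-3eac-11ea-a974-63a080abab18</uuid>
  <!-- stable path instead of /dev/sdd1; UUID taken from the blkid output above -->
  <fsname>/dev/disk/by-uuid/a72f30ef-09bd-4b1f-9f55-1122f120cab6</fsname>
  <dir>/</dir>
  <type>ext4</type>
  <opts>errors=remount-ro,discard,noatime</opts>
  <freq>0</freq>
  <passno>1</passno>
  <hidden>1</hidden>
</mntent>
```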
I know I posted a lot of stuff here and can fully understand if you guys lost track.
But can anyone answer my last question: can I manually add entries to fstab?
If not, how would I go about adding this EFI partition?
OK, so I think I understand how this works now.
fstab will not get altered by OMV. The <mntent> entries in config.xml only link the respective mounts to the GUI logic.
So nothing needs to be done for this EFI partition, I assume.
Is that correct?
But can anyone answer my last question: can I manually add entries to fstab?
The answer is yes and no.
Yes, if the added entries are outside of the openmediavault stanza.
I am not sure what exactly you refer to as a stanza. But I would guess any partitions that are directly accessed and used by OMV belong to it.
So I assume the EFI partition does not belong to it.
The openmediavault stanza is this in fstab:
>>> [openmediavault]
Everything within this section of the file.
<<< [openmediavault]
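Concretely, a manually added mount can sit below the closing marker. A sketch for the EFI partition, using the vfat UUID from the blkid output above; the /boot/efi mount point and the mount options are the usual Debian convention, assumed here:

```
# >>> [openmediavault]
# ... entries in here are managed by OMV, do not edit ...
# <<< [openmediavault]

# manual entry, outside the stanza, left alone by OMV
UUID=0D28-D1AD  /boot/efi  vfat  umask=0077  0  1
```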
Aahhhh, now I got it. Even better: I do remember, I had this knowledge already... but I forgot about it.
Thanks!
OK, now the system seems to be clean except for the network interfaces part.
I think there are some leftovers of the NIC change from eth0 to enp2s0.
In dmesg I see a few weird things:
[ 17.485932] r8169 0000:02:00.0 enp2s0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 17.485970] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0: link becomes ready
[ 18.826792] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
I don't know why eth0 is mentioned here. As far as I can tell, there is no eth0 on this device anymore:
root@debnas:~# nmcli device status
DEVICE TYPE STATE CONNECTION
br-062344f85949 bridge connected (externally) br-062344f85949
br-1dd16504404f bridge connected (externally) br-1dd16504404f
br-9c2807aa33d0 bridge connected (externally) br-9c2807aa33d0
br-b57450777507 bridge connected (externally) br-b57450777507
br-c97b03a534b8 bridge connected (externally) br-c97b03a534b8
docker0 bridge connected (externally) docker0
enp2s0 ethernet unmanaged --
veth109f7c4 ethernet unmanaged --
veth1d2686f ethernet unmanaged --
lo loopback unmanaged --
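One likely benign explanation for the dmesg line, rather than a leftover: Docker names the container-side end of every veth pair "eth0" inside the container's own network namespace, and the ADDRCONF messages from those namespaces still land in the host's dmesg. A quick check of the host side:

```shell
# Print every host-side interface name. A container's eth0 lives in its
# own network namespace and is not listed here, even though the kernel
# logs ADDRCONF messages for it.
ls /sys/class/net
```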
Also, I get these messages and a segfault (which might be connected to the above?):
[ 20.865127] device vethc422d57 entered promiscuous mode
[ 20.867866] docker0: port 2(vethc422d57) entered blocking state
[ 20.867877] docker0: port 2(vethc422d57) entered forwarding state
[ 20.868126] docker0: port 2(vethc422d57) entered disabled state
[ 20.890811] dhcpcd[969]: segfault at 8 ip 0000561a316992e0 sp 00007ffc2db90558 error 4 in dhcpcd[561a31696000+32000] likely on CPU 0 (core 4, socket 0)
[ 20.890839] Code: a0 00 00 00 48 8b 00 48 85 c0 74 45 66 0f 1f 44 00 00 48 39 c7 74 1a 8b 48 2c 85 c9 74 13 48 8b 88 c8 00 00 00 66 85 f6 75 07 <8b> 49 08 39 0a 74 0a 48 8b 40 08 48 85 c0 75 d8 c3 48 8d 50 18 48
Also, OMV does not know eth0:
Look what I found in my dmesg today:
[ 30.918687] ata4.00: exception Emask 0x11 SAct 0x40c0 SErr 0x0 action 0x6 frozen
[ 30.918739] ata4.00: irq_stat 0x48000008, interface fatal error
[ 30.918760] ata4.00: failed command: READ FPDMA QUEUED
[ 30.918779] ata4.00: cmd 60/08:30:46:55:2d/00:00:2a:00:00/40 tag 6 ncq dma 4096 in
res 41/84:01:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[ 30.918831] ata4.00: status: { DRDY ERR }
[ 30.918844] ata4.00: error: { ICRC ABRT }
[ 30.918859] ata4.00: failed command: READ FPDMA QUEUED
[ 30.918878] ata4.00: cmd 60/08:38:36:76:2d/00:00:2a:00:00/40 tag 7 ncq dma 4096 in
res 41/84:ff:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[ 30.918928] ata4.00: status: { DRDY ERR }
[ 30.918941] ata4.00: error: { ICRC ABRT }
[ 30.918955] ata4.00: failed command: READ FPDMA QUEUED
[ 30.918973] ata4.00: cmd 60/08:70:1e:16:6d/00:00:2a:00:00/40 tag 14 ncq dma 4096 in
res 41/84:ff:00:00:00/00:00:00:00:00/00 Emask 0x10 (ATA bus error)
[ 30.919019] ata4.00: status: { DRDY ERR }
[ 30.919032] ata4.00: error: { ICRC ABRT }
[ 30.919049] ata4: hard resetting link
[ 31.233360] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 31.239591] ata4.00: configured for UDMA/133
[ 31.239623] ata4: EH complete
I wonder if this little M.2-to-SATA controller is reliable at all...
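The ICRC/ABRT combination in that dmesg output usually points at the link (cable, connector, or the controller) rather than at the disk surface; the drive counts such events in SMART attribute 199. A sketch of how to check (smartmontools assumed installed; /dev/sdX is a placeholder for the device behind ata4):

```shell
# UDMA_CRC_Error_Count (attribute 199) increments on interface CRC errors;
# a rising count after this kind of dmesg output implicates cable/controller.
# DEV is a placeholder -- substitute the actual ata4 device, e.g. /dev/sdc.
DEV=/dev/sdX
if command -v smartctl >/dev/null 2>&1; then
    smartctl -A "$DEV" | grep -i crc || true
fi
```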