I'm currently sitting on 5.15.35-1-pve. I have never tried to install 5.13.x; let me see if I can figure out how to.
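Looks like it would be something along these lines (package names are my guess, I'll check what the repo actually offers first):
Code
# Assumes the Proxmox kernel repository is already set up (e.g. via the omv-extras kernel plugin)
apt update
apt search pve-kernel-5.13          # confirm which 5.13 packages actually exist
apt install pve-kernel-5.13         # metapackage for the 5.13 kernel series
# reboot and pick the 5.13 entry in the GRUB menu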
Posts by Majorpayne
-
-
This is still an issue, and I'm now on version 5.15.35-1-pve.
-
This is still happening on the newest build.
-
Yeah, installed it about 30 mins after my last post. Do you know if the non-Proxmox kernels work with ZFS?
-
Kind of surprising, because Proxmox is so network-connected. What card is it?
Onboard ASUS Prime Z690M-Plus D4 - the i219-v is causing "Detected hardware unit hang". I'm running kernel 5.15.30-1.
-
I'm using a 12th Gen Intel(R) Core(TM) i5-12600K, my only issue is my NIC with the Proxmox Kernel
-
What proxmox kernel are you using? Did you try the newest 5.15?
5.15.30-1
-
I need to find a NIC that will work with the Proxmox kernel and won't cause "Detected Hardware Unit Hang" (my onboard NIC is causing this). I have tried what one user suggested; it did not work, although I'm not sure if my config is correct.
I'm to the point where I'm almost desperate. I can fix this when I'm home, which is 99% of the time, but I'm about to travel and need to make sure this does not happen while I'm away and the wife can't watch her TV. Please help.
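For context, the suggested change is roughly of this shape as I understand it ("eno1" is just a placeholder, the real name comes from "ip link"):
Code
# In /etc/network/interfaces, inside the existing stanza for the onboard
# NIC ("eno1" is a placeholder, check the real name with "ip link"):
    post-up /usr/sbin/ethtool -K eno1 tso off gso off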
-
Anyone else on this? Maybe I'm doing it wrong... maybe I'm using the wrong config. Please help; I'm going away for work for a week and can't have this go down while I'm away and the wife can't get to her shows.
-
-
Thanks I'll give this a shot.
-
Good Morning,
I have been having issues with my onboard networking. It's an i219-v, and at times I get "Detected hardware unit hang". When this happens, I can't access the GUI. I'm using the Proxmox kernel as I'm also using ZFS.
I have found an article on what can fix this, but I'm not familiar enough with the package. I just want to make sure I'm not going to cause myself other issues in the future.
Article: How To Fix Proxmox Detected Hardware Unit Hang On Intel NICs (f2h.cloud)
TL;DR
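As far as I can tell the fix boils down to turning off TSO/GSO offload on the NIC with ethtool (my reading of it, not verified, and "eno1" is a placeholder for the I219-V interface name):
Code
ethtool -k eno1                      # show the current offload settings
ethtool -K eno1 tso off gso off      # disable TSO/GSO, the common e1000e workaround
This apparently only lasts until reboot unless it's also hooked into the interface config.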
-
Majorpayne, did you do a "badblocks" test on your 16TB drives before putting them into use? I know it takes a long time to do.
In addition to checking SMART data, here's an interesting article about using the zpool iostat command to monitor and identify possible disk problems: https://klarasystems.com/artic…ol-perfomance-and-health/
No, I did not attempt a badblocks test.
I will also look at zpool iostat. I want to preface this: it normally happens when I'm unraring something.
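I assume the command gets used something like this (pool name and interval are just examples):
Code
# Per-vdev bandwidth and operations, refreshed every 5 seconds
zpool iostat -v Fileserver 5
# Newer OpenZFS can also show per-vdev latency columns
zpool iostat -lv Fileserver 5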
-
That's indicative of a failing drive even if there are no other signs
Although it's not impossible, these are brand-new 16TB enterprise drives. I'll have SMART look into it.
Wonder if it's the onboard SATA. Thinking I should move to the LSI card instead. Would I lose data if I did that?
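For the SMART check, I assume it's roughly this (device name is just an example):
Code
smartctl -H /dev/sda                 # quick health verdict
smartctl -a /dev/sda                 # full attribute dump
smartctl -t long /dev/sda            # start a long self-test, check results later with -a
-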
It’s just using the ram for cache. If you use a ram intensive program then it will lower how much cache it uses. Anyway, what’s the point of having a load of unused memory? This way you get use of it.
I have no problem with it using it, as long as it frees it up when I need it for other processes. I also wish I could use one of my NVMe drives as a cache. These cpu_iowaits are bothering me lol.
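If one of my NVMe drives were actually free, I gather adding it as an L2ARC read cache (and capping the ARC) would look something like this; purely a sketch, the device path is made up and my NVMe drives currently hold ext4 data:
Code
# Add a spare NVMe device as an L2ARC read cache (hypothetical device path,
# the device must not contain anything you want to keep)
zpool add Fileserver cache /dev/disk/by-id/nvme-EXAMPLE
# Cap the ARC at e.g. 8 GiB so more RAM stays free for other processes
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf   # creates/overwrites the file
update-initramfs -u                  # then reboot for the new limit to apply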
-
Sorry to bring this back from the dead...
Will the usage ever drop? I have 32GB of RAM and it's currently using 71%, with about 10 Docker containers running. Should I increase my RAM to 64GB, or just keep an eye on it for now and see what happens when I run more containers?
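Before I buy more RAM I suppose I should check what's actually holding it, roughly:
Code
free -h                              # split between used, buffers/cache and available
arc_summary | head -n 40             # how much of it is the ZFS ARC (if ZFS is in use)
docker stats --no-stream             # per-container memory usage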
-
Thanks, made the change.
-
Yeah, just checked and noticed that "Debian GNU/Linux, with Linux 5.16.0-0.bpo.4-amd64" was selected again. Why did it change on its own? Heck, for that matter, why did it stop working before the reboot even?
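If it keeps reverting, I guess the fallback is pinning the default entry in GRUB myself; the menu title below is only an example, the exact string has to be copied from /boot/grub/grub.cfg:
Code
# /etc/default/grub
GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.15.35-1-pve"
Then run update-grub so the change takes effect.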
-
Good Morning,
I woke up this morning and noticed that OMV was slow to load the dashboard. I decided to reboot the system, and lo and behold, my ZFS pools are gone; instead the page is returning an Internal Server Error (500). The error is below. 99% of the data stored in the shares is gone. I have also included the blkid output.
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; zfs list -p -H -o name,mountpoint -t filesystem' with exit code '1': OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; zfs list -p -H -o name,mountpoint -t filesystem' with exit code '1': in /usr/share/php/openmediavault/system/process.inc:197
Stack trace:
#0 /usr/share/omvzfs/Utils.php(441): OMV\System\Process->execute(Array, 1)
#1 /usr/share/omvzfs/Filesystem.php(51): OMVModuleZFSUtil::exec('zfs list -p -H ...', Array, 1)
#2 /usr/share/php/openmediavault/system/filesystem/backend/zfs.inc(32): OMVModuleZFSFilesystem::getAllFilesystems()
#3 /usr/share/php/openmediavault/system/filesystem/backend/manager.inc(280): OMV\System\Filesystem\Backend\Zfs->enumerate()
#4 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(792): OMV\System\Filesystem\Backend\Manager->enumerate()
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(182): OMV\System\Filesystem\Filesystem::getFilesystems()
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->enumerateFilesystems(NULL, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(380): OMV\Rpc\ServiceAbstract->callMethod('enumerateFilesy...', NULL, Array)
#9 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getList(Array, Array)
#10 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
#12 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusgy...', '/tmp/bgoutputF2...')
#13 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#14 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(519): OMV\Rpc\ServiceAbstract->callMethodBg('getList', Array, Array)
#15 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getListBg(Array, Array)
#16 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#17 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getListBg', Array, Array)
#18 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'getListBg', Array, Array, 1)
#19 {main}
Code
root@openmediavault:~# blkid
/dev/nvme0n1p1: UUID="0589-F33E" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="ef033bab-7a3f-4ac3-ac43-167ddc06c616"
/dev/nvme0n1p2: UUID="09381372-f851-4546-aee8-a09d7adb0dd1" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ca1bcea6-37af-48b4-92be-f109e8e478fd"
/dev/nvme0n1p3: UUID="e6979fe8-392b-4d57-9b61-f652d4cddb7b" TYPE="swap" PARTUUID="b5be99a9-fb76-4c43-a10f-5599a3ed5845"
/dev/nvme2n1p1: UUID="da6c2005-dd2b-46a3-a316-b3dd2be8234c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="aba8de09-ae9b-485a-8e79-c00a00155c6a"
/dev/nvme1n1p1: UUID="579c6cde-d8b4-4895-a757-683c5b9bf007" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="4fc5e650-1d22-464b-894f-ec22d1b8625e"
/dev/sda1: LABEL="Fileserver" UUID="3859620777958749484" UUID_SUB="2864656269252861325" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-35d4b5d440cb53aa" PARTUUID="7654c55a-6852-5447-929a-9e0c5b3504f4"
/dev/sdc1: LABEL="Fileserver" UUID="3859620777958749484" UUID_SUB="3374499318711039503" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-a154f2d0bf640f62" PARTUUID="4283d8e0-8c80-fa41-bf88-944f6cdc9c54"
/dev/sdb1: LABEL="Fileserver" UUID="3859620777958749484" UUID_SUB="17120737476732298041" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-e63320f7a77e2e11" PARTUUID="8aab2a1f-4930-c94f-824c-f84d0785b40d"
/dev/sdd1: LABEL="Fileserver" UUID="3859620777958749484" UUID_SUB="1273765534249988387" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-e26f8ea713768425" PARTUUID="a2ce45a9-0dc6-9348-b1d6-ac69e9ddac84"
/dev/sda9: PARTUUID="4688e935-24b0-df45-8e8d-262d007a7b07"
/dev/sdc9: PARTUUID="793c8eb2-c843-2c41-993f-1fdc95953f63"
/dev/sdb9: PARTUUID="61208670-58b4-0548-aaa6-a6e0ac729706"
/dev/sdd9: PARTUUID="ec53b964-eb15-3c42-b262-b3e9eb61a45a"
-
Use the CLI and import the pool; it should then show in the web UI.
I have never tried the CLI for ZFS before. Is this a good starting point?
ZFS command line reference (Cheat sheet) – It’s Just Bytes… (wordpress.com)
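From the cheat sheet, I gather the import itself would be roughly this (pool name taken from the blkid labels above):
Code
zpool import                         # list pools that are visible but not imported
zpool import Fileserver              # import by name
zpool import -f Fileserver           # only if it complains the pool was used by another system
zpool status                         # verify
zfs list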