I was able to get into the GUI, removed the 6.14 and 6.8.12 kernels, and made 6.8.10 the default.
Appears to be working now....
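For anyone finding this later, the same cleanup can be done from the shell. This is only a sketch: the exact package names and version strings vary by release, so list them first and substitute what your system actually shows.

```shell
# List the installed Proxmox kernel packages (naming varies: pve-kernel-* on
# older releases, proxmox-kernel-* on newer ones)
dpkg -l | grep -E 'pve-kernel|proxmox-kernel'

# Remove the unwanted kernels - these package names are examples only
apt purge proxmox-kernel-6.14 proxmox-kernel-6.8.12-8-pve

# Pin the kernel you want to boot by default; get the exact version string
# from 'proxmox-boot-tool kernel list' first
proxmox-boot-tool kernel pin 6.8.10-1-pve

# Rebuild the GRUB menu so the change takes effect
update-grub
```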
I tried loading 6.8 from GRUB, but got the same results.
What is the best way to accomplish your recommendation?
I was able to get a bit further booting into 6.8.12-8 (6.8.12-10 gave the same results).
I can SSH into it now
I can get into the GRUB bootloader screen.
Well, surprise - that made a difference - I feel like such an idiot
I am a goof - I missed the DNS settings down at the bottom in tiny print. Doh. Will see what happens now that I added the DNS entries.
I had my network's DNS set to the IP of my firewall - changing it to use 1.1.1.1 directly to see what happens.
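To verify the DNS change actually took effect, a quick check from the shell helps (the repo hostname below is just an example target; any external name works):

```shell
# Show which resolvers the system is actually using
cat /etc/resolv.conf

# Test resolution through the configured resolver...
nslookup packages.openmediavault.org

# ...and directly against 1.1.1.1, to separate a bad resolver
# from a broader network problem
nslookup packages.openmediavault.org 1.1.1.1
```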
Same error - even after a reboot.
Long-time user of OMV since the initial beta release. Was running version 6 - the SSD it was installed on up and died completely. Installed a new SSD and installed v7. Applied all updates and installed the latest Proxmox kernel, 6.8.4.2-pve. Installed the ZFS plugin. The Updates page shows a lot of updates for ZFS and Proxmox, but they will not install - I keep getting an error:
nslookup gives this result:
Hitting the site directly gives:
Any thoughts as to what might be going on?
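When apt updates fail like this, it helps to separate DNS failures from network/proxy problems. A hedged sketch (the Proxmox repo hostname is assumed from the default no-subscription repository configuration; substitute the host from the actual apt error):

```shell
# Step 1: does DNS resolve the repo host at all?
nslookup download.proxmox.com

# Step 2: does HTTPS to the repo actually work?
curl -vI https://download.proxmox.com/ 2>&1 | tail -n 5

# Step 3: apt's own error output is usually the most specific clue
apt-get update 2>&1 | grep -iE 'err|fail'
```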
Thanks in advance.
George
root@omv:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 37.3G 0 disk
├─sda1 8:1 0 33.3G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 4G 0 part
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 2.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 2.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 2.7T 0 disk
├─sde1 8:65 0 2.7T 0 part
└─sde9 8:73 0 8M 0 part
root@omv:~# blkid
/dev/sda1: UUID="8fe85a12-e374-4077-b74a-ee94e3f03c9a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="27cfb606-01"
/dev/sda5: UUID="12ef4d5b-ca30-465c-b2df-0c973f406975" TYPE="swap" PARTUUID="27cfb606-05"
/dev/sdb1: LABEL="HULK" UUID="9330893893053432853" UUID_SUB="15546707825607062170" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-aea0ce1d969688a4" PARTUUID="8a80deed-9445-3c41-95ac-b41f029a9c80"
/dev/sdc1: LABEL="HULK" UUID="9330893893053432853" UUID_SUB="1917498448643848570" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-51a133e26a7b7692" PARTUUID="ebfd32dc-cecb-3c4c-8f06-6cea7bdce066"
/dev/sdd1: LABEL="HULK" UUID="9330893893053432853" UUID_SUB="13708763445564949633" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-4996ce0229ce86c9" PARTUUID="ee814313-d048-4b46-bdec-e012e2b91a1a"
/dev/sde1: LABEL="HULK" UUID="9330893893053432853" UUID_SUB="13287085235517311875" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-4b8bf50813b1abca" PARTUUID="69314e55-1427-5348-a571-5d125e150d68"
/dev/sdb9: PARTUUID="ae917c21-1e12-8a43-9363-eb6f0a6d224b"
/dev/sdc9: PARTUUID="89c67b54-7984-8946-af6d-d1c5e09e9d54"
/dev/sdd9: PARTUUID="c87f65b4-67cd-b84e-8fc9-f2f0487de137"
/dev/sde9: PARTUUID="9d69f21c-6491-de4a-9a78-53e5056d3d63"
root@omv:~#
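The blkid output above shows all four 2.7T disks as zfs_member partitions sharing the same pool UUID, so the pool metadata looks intact. A sketch of checking and importing the pool from the shell (the -f flag is only needed if ZFS complains the pool was in use on another system):

```shell
# Show pools the system currently has imported
zpool status

# If HULK is not listed, see whether it is visible and importable
zpool import

# Import it by name
zpool import HULK

# Confirm the datasets mounted where expected
zfs list
```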
/HULK/3DModels (and the like) for all shares under the absolute path - the array name is Hulk.
It looks like it is the folder containing all my share folders and the files in them.
500 - Internal Server Error
Removing the directory '/srv/a60da872-c67b-49e6-b0ad-1c94d65eadf4' has been aborted, the resource is busy.
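The "resource is busy" part of that error means some process still has the directory open or mounted. A sketch for finding out which one (the path is taken from the error message above):

```shell
# Show the processes holding the mountpoint open, with user and access mode
fuser -vm /srv/a60da872-c67b-49e6-b0ad-1c94d65eadf4

# lsof gives the same information with more detail per process
lsof +f -- /srv/a60da872-c67b-49e6-b0ad-1c94d65eadf4
```

A running Docker container or an active SMB/NFS client is a common culprit when OMV cannot remove a share directory.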
After a recent update (currently running version 6.9.11-4), I went to confirm the pending configuration changes - and it never successfully completes. I have tried a few things I found in the forums (resetting the salt), but that did not work. Any words of wisdom?
Thank you
Current services: Docker, FlashMemory, MiniDLNA, NFS, SMB/CIFS and SSH.
Pending configuration changes
You must apply these changes in order for them to take effect.
The following modules will be updated:
avahi
collectd
fstab
monit
quota
remotemount
rsync
sharedfolders
systemd
task
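When the apply step hangs, one approach (a sketch, not an official procedure) is to deploy the pending modules one at a time from the shell to see which one stalls; the module names are taken from the pending-changes list above:

```shell
# Deploy each pending module individually and watch where it hangs
omv-salt deploy run avahi
omv-salt deploy run collectd
omv-salt deploy run fstab
omv-salt deploy run monit
# ...continue with quota, remotemount, rsync, sharedfolders, systemd, task
```

Whichever module hangs usually points to the misbehaving service (e.g. fstab stalling on an unreachable remote mount).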
Running the following:
OMV 5.5.14-1
ZFS Plugin 5.0.5
Kernel 5.4.44-2.pve
After the latest OMV update, my ZFS pool cannot be seen by shares. I tried changing to various kernels (5.4.65.1-pve and 5.4.44-1.pve), but still no luck.
If I click on Shared Folders - all shares show n/a under Device. If I export and import Pool - it shows up under ZFS with status "ok", but if I try to create a new test share - the device dropdown is empty.
If I try to edit an existing share, I get the following error:
Failed to execute XPath query '//system/fstab/mntent[uuid='47753d56-a12b-4c70-bd97-89bbf3aff968']'.
Here are the details:
Error #0:
OMV\Config\DatabaseException: Failed to execute XPath query '//system/fstab/mntent[uuid='47753d56-a12b-4c70-bd97-89bbf3aff968']'. in /usr/share/php/openmediavault/config/database.inc:78
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(234): OMV\Config\Database->get('conf.system.fil...', '47753d56-a12b-4...')
#1 [internal function]: Engined\Rpc\ShareMgmt->get(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ShareMgmt', 'get', Array, Array, 1)
#5 {main}
Any thoughts on what to try to get access to my share back?
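The XPath error above means the shared folder still references a filesystem UUID that no longer exists in OMV's config database. A sketch for comparing the two sides (omv-confdbadm is OMV's config CLI; the UUID is the one from the error):

```shell
# Dump the mount entries OMV knows about and look for the missing UUID
omv-confdbadm read conf.system.filesystem.mountpoint | grep 47753d56

# Compare with what ZFS actually has mounted
zfs list -o name,mountpoint
```

If the UUID is absent from the config dump, the share's mntent entry was lost, which matches the n/a shown under Device.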
TIA
George
I will try to get that info - with our new semester starting I have been busy getting labs ready, updating our Cisco curriculum to the new v7, and lots of other prep work.
And now I have the fun experience of the UDM Pro that I manage 2,000 miles away having taken an auto-update the other day - and now I cannot connect through the cloud dashboard. Lots of people are having the same issue with the WAN port and its ability to auto-negotiate - it requires an on-site fix - sigh....
Thanks
George
I have played a lot with the UDM Pro's port forwarding rules and never got it to work with this Nextcloud installation. The larger issue is that I cannot even access the Nextcloud GUI from the LAN, and I have tried the changes from every .config and .php troubleshooting guide I could find. Still no luck.
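Being unreachable from the LAN is often a trusted_domains problem rather than port forwarding. A sketch using Nextcloud's occ CLI (the install path, web user, index number, and LAN IP below are all placeholders - adjust for the actual setup):

```shell
# Show the domains/IPs Nextcloud currently accepts requests on
sudo -u www-data php /var/www/nextcloud/occ config:system:get trusted_domains

# Add the server's LAN IP at the next free index (1 here is an example)
sudo -u www-data php /var/www/nextcloud/occ config:system:set trusted_domains 1 --value=192.168.1.50
```

If the LAN IP is not in that list, Nextcloud serves an "untrusted domain" page instead of the GUI, which looks a lot like the install being broken.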