I've tried attaching the drives to a PCIe SATA adapter, same deal.
Posts by sardaukar
-
Ever since migrating to OMV 5.x, I get random ZFS resilvering events, usually around the 8-9 day uptime mark, but sometimes after just 2 days. Not sure what's wrong, but I see this in dmesg:
Code
[181263.131531] sde: sde1 sde9
[181264.346713] sdc: sdc1 sdc9
[181265.722670] sdb: sdb1 sdb9
[181267.026973] sdd: sdd1 sdd9
[181269.223494] sde: sde1 sde9
[181270.562879] sdb: sdb1 sdb9
[181271.955974] sdd: sdd1 sdd9
[181273.965705] sde: sde1 sde9
[181275.147701] sdb: sdb1 sdb9
[181276.085743] sdd: sdd1 sdd9
[181276.880991] sde: sde1 sde9
[181278.314671] sde: sde1 sde9
[181309.204054] sde: sde1 sde9
This repeats over and over, and the only fix is a reboot. Is this a hardware issue with my new motherboard? Has anyone seen something like this before?
Thanks!
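For what it's worth, the repeated partition rescans in a log like the one above can be tallied per device with a short pipeline. This is just a sketch: the sample lines are inlined here for illustration, but in practice you would pipe `dmesg` into the `awk` stage.

```shell
# Tally repeated "sdX: sdX1 sdX9" partition-rescan lines per device.
# Normally: dmesg | grep -E 'sd[a-z]+: sd' | awk ... | sort
printf '%s\n' \
  '[181263.131531] sde: sde1 sde9' \
  '[181264.346713] sdc: sdc1 sdc9' \
  '[181269.223494] sde: sde1 sde9' \
| awk '{gsub(":", "", $2); count[$2]++} END {for (d in count) print d, count[d]}' \
| sort
# prints:
# sdc 1
# sde 2
```

A device that dominates the count (here, sde) is the first suspect for a flaky cable, port, or power connection.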
-
Anyone got pointers on a good solution to browse photos on your NAS? I'd love to use something Docker-based, either a dedicated gallery or a file explorer that allows browsing images, with EXIF parsing.
I've tried Piwigo, Zenphoto, Filerun and a few others, but Piwigo and Zenphoto are both a bit stiff with album creation, and I couldn't get Filerun to connect to its MySQL DB with the provided compose file.
Ideally, I'd just like to bind-mount my server's photo root into the container (read-only) and browse it, with bonus points for a login system.
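As a sketch of the read-only bind idea, a compose file along these lines works with most Docker-based gallery images. The image name, port, and host path below are placeholders, not a recommendation:

```yaml
version: "3"
services:
  gallery:
    image: some/gallery-image   # placeholder: substitute whichever gallery you pick
    ports:
      - "8080:80"
    volumes:
      # bind the photo tree read-only so the container can browse but never modify it
      - /srv/dev-disk-by-label-data/photos:/photos:ro
    restart: unless-stopped
```

The `:ro` suffix on the volume is what guarantees the container can't touch the originals.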
-
Managed to fix it by replacing "-H fd://" with "-H unix://" in /etc/systemd/system/docker.service.d/openmediavault.conf
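For anyone else hitting this, the change amounts to a drop-in along these lines. This is a sketch; the exact ExecStart options OMV ships may differ, so edit the existing file rather than pasting this wholesale:

```
# /etc/systemd/system/docker.service.d/openmediavault.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix://
```

Then run `systemctl daemon-reload` followed by `systemctl restart docker`. The empty `ExecStart=` line clears the original command before the override replaces it.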
-
Seeing this too
Code
Nov 08 08:52:43 omv4 systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit docker.service has begun starting up.
Nov 08 08:52:43 omv4 dockerd[11288]: Failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Nov 08 08:52:43 omv4 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Nov 08 08:52:43 omv4 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit docker.service has failed.
--
-- The result is failed.
Nov 08 08:52:43 omv4 systemd[1]: docker.service: Unit entered failed state.
Nov 08 08:52:43 omv4 systemd[1]: docker.service: Failed with result 'exit-code'.
-
@sardaukar Did it finish successfully in the meantime?
@cabrio_leo sorry for the late reply - yes, it worked
-
It's resilvering at 151M/s now with 8h38m to go, down from the initial estimate of 200M/s.
-
@tkaiser here it is:
-
Thanks, @cabrio_leo! Resilvering 5.99TB now with an ETA of 9h44m.
-
@tkaiser so, basically, doing what it says here: https://askubuntu.com/question…ng-a-dead-disk-in-a-zpool ? And after the resilvering is done, detach the old disk?
-
Hello!
One of the drives in my ZFS pool (the only pool I have) has failed. Here's my pool's config:
Code
NAME                                          STATE     READ WRITE CKSUM
zpool0                                        DEGRADED     0     0     0
  raidz1-0                                    DEGRADED     0     0     0
    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1LA93V8  ONLINE       0     0     0
    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DKCLHF  FAULTED      1 1.55K    10  too many errors
    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5NZV3UY  ONLINE       0     0     0
    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5NZV6J6  ONLINE       0     0     0
How can I replace it? Can I just power down the NAS, swap the drive, and run a scrub? Or do I have to plug the new drive in and do a detach/attach? Thanks for any help!
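In case it helps others who land here: the usual sequence for a raidz1 member is `zpool replace`, not a scrub. A sketch, assuming the replacement drive shows up under /dev/disk/by-id (the new-disk id below is a placeholder):

```
# Mark the faulted disk offline (optional if it is already FAULTED)
zpool offline zpool0 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DKCLHF
# Power down, swap the drive, boot, then point ZFS at the replacement
zpool replace zpool0 ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1DKCLHF /dev/disk/by-id/ata-NEW_DISK_ID
# Watch the resilver; the old disk is detached automatically when it finishes
zpool status zpool0
```

No manual detach/attach is needed with `zpool replace`; ZFS drops the old member once the resilver completes.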
-
Thanks for the help, fixed!
-
I have the same issue: my /var/www/openmediavault/extjs6 folder doesn't exist, so the ExtJS 6 references in the source are all 404ing. How do I get these files? (I'm on 3.0.79)
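One thing worth trying: reinstalling the core package usually restores missing files under /var/www/openmediavault. A sketch; run from a root shell, and note that whether extjs6 ships in this package on 3.x is an assumption here:

```shell
apt-get update
apt-get install --reinstall openmediavault
```

If the folder is still missing afterwards, the files may come from a separate dependency, in which case `dpkg -S /var/www/openmediavault` on a working install would show which package owns them.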
-
I've been trying out 3.x and so far it's a bit rough around the edges, but very usable.
I do have an error with the RRD graphs; when I run omv-mkgraph, this is the output:
I noticed none of the files exist inside /var/lib/rrdcached/db, so I created one by hand, but still no go.
What can I do to fix this? Thanks!
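For the record, the RRD graphs are generated from collectd's databases, so hand-creating files in /var/lib/rrdcached/db won't help until collectd writes real data there. A common rebuild sequence, assuming the stock OMV collectd/rrdcached services:

```shell
systemctl restart rrdcached
systemctl restart collectd
# give collectd a minute or two to write fresh .rrd files, then
omv-mkgraph
```

If the .rrd files still don't appear, the collectd log is the next place to look.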
-
I'm on the latest 2.x, on the same hardware that was working perfectly before. I did compile my own 4.6.5 kernel so that I could use Docker with AuFS, but that was weeks ago. Why is this happening only now?
-
I've used OMV for a couple of years now with no issues, but lately it's been buggy as hell. I've been getting these errors; any idea what this might be?
Code
Aug 20 04:44:32 omv kernel: [60739.306892] ------------[ cut here ]------------
Aug 20 04:44:32 omv kernel: [60739.306921] WARNING: CPU: 1 PID: 347 at drivers/md/raid5.c:1618 raid_run_ops+0x153c/0x15a0 [raid456]
Aug 20 04:44:32 omv kernel: [60739.306924] Modules linked in: pci_stub vboxnetflt(O) vboxdrv(O) veth xt_nat xt_tcpudp ipt_MASQUERADE nf_nat_masquerade_ipv4 xfrm_user xfrm_algo iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc xt_multiport iptable_filter ip_tables x_tables cpufreq_powersave cpufreq_conservative cpufreq_userspace cpufreq_stats softdog nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace sunrpc loop radeon joydev evdev input_leds snd_hda_codec_realtek psmouse snd_hda_codec_generic snd_hda_codec_hdmi edac_mce_amd serio_raw edac_core k10temp snd_hda_intel pcspkr sp5100_tco snd_hda_codec fam15h_power ttm snd_hda_core i2c_piix4 drm_kms_helper snd_hwdep snd_pcm snd_timer drm video snd button soundcore acpi_cpufreq tpm_tis tpm dm_mod raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx md_mod sg sd_mod hid_generic usbhid hid ahci libahci libata ohci_pci r8169 ohci_hcd scsi_mod fjes [last unloaded: vboxnetadp
Aug 20 04:44:32 omv kernel: ]
Aug 20 04:44:32 omv kernel: [60739.307033] CPU: 1 PID: 347 Comm: md0_raid5 Tainted: G O 4.6.5-sardaukar #1
Aug 20 04:44:32 omv kernel: [60739.307037] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./QC5000-ITX, BIOS P1.10 09/09/2014
Aug 20 04:44:32 omv kernel: [60739.307041] 0000000000000286 0000000000000000 ffffffff815c7aed 0000000000000000
Aug 20 04:44:32 omv kernel: [60739.307046] 0000000000000000 0000000000000000 ffffffff810d88a4 0000000000000001
Aug 20 04:44:32 omv kernel: [60739.307051] ffff88015dfcb460 ffff880158d9f860 ffff88015dfcb988 ffff8801343c7cb8
Aug 20 04:44:32 omv kernel: [60739.307056] Call Trace:
Aug 20 04:44:32 omv kernel: [60739.307068] [<ffffffff815c7aed>] ? dump_stack+0x5d/0x80
Aug 20 04:44:32 omv kernel: [60739.307074] [<ffffffff810d88a4>] ? __warn+0xf4/0x110
Aug 20 04:44:32 omv kernel: [60739.307081] [<ffffffffa01388ec>] ? raid_run_ops+0x153c/0x15a0 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307088] [<ffffffffa01344f4>] ? analyse_stripe+0xc4/0x660 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307094] [<ffffffffa0137240>] ? ops_complete_compute+0x70/0x70 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307100] [<ffffffffa013dd89>] ? handle_stripe+0xae9/0x1d60 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307105] [<ffffffff815c792f>] ? cpumask_next_and+0x1f/0x40
Aug 20 04:44:32 omv kernel: [60739.307112] [<ffffffffa013f32e>] ? handle_active_stripes.isra.51+0x32e/0x480 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307119] [<ffffffff81110260>] ? put_prev_entity+0x40/0x690
Aug 20 04:44:32 omv kernel: [60739.307125] [<ffffffffa013fa4a>] ? raid5d+0x4ba/0x760 [raid456]
Aug 20 04:44:32 omv kernel: [60739.307132] [<ffffffff8197926b>] ? __schedule+0x29b/0x8f0
Aug 20 04:44:32 omv kernel: [60739.307141] [<ffffffffa01108ca>] ? md_thread+0xfa/0x130 [md_mod]
Aug 20 04:44:32 omv kernel: [60739.307145] [<ffffffff81119910>] ? add_wait_queue+0x60/0x60
Aug 20 04:44:32 omv kernel: [60739.307152] [<ffffffffa01107d0>] ? super_1_load+0x560/0x560 [md_mod]
Aug 20 04:44:32 omv kernel: [60739.307158] [<ffffffff810f77c1>] ? kthread+0xc1/0xe0
Aug 20 04:44:32 omv kernel: [60739.307164] [<ffffffff8197d222>] ? ret_from_fork+0x22/0x40
Aug 20 04:44:32 omv kernel: [60739.307168] [<ffffffff810f7700>] ? kthread_worker_fn+0x160/0x160
Aug 20 04:44:32 omv kernel: [60739.307172] ---[ end trace 0cad9b5dc2b2c5ac ]---
Thanks for any pointers...
-
Well, I've compiled 4.6.5 and it still doesn't work with VirtualBox. I get this when trying to recompile the kernel modules:
Code
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjNativeFree’:
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:581:21: error: implicit declaration of function ‘page_cache_release’ [-Werror=implicit-function-declaration]
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjNativeLockUser’:
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:1039:29: warning: passing argument 1 of ‘get_user_pages’ makes integer from pointer without a cast [enabled by default]
In file included from /tmp/vbox.0/r0drv/linux/the-linux-kernel.h:88:0,
                 from /tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:31:
include/linux/mm.h:1288:6: note: expected ‘long unsigned int’ but argument is of type ‘struct task_struct *’
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:1039:29: warning: passing argument 2 of ‘get_user_pages’ makes integer from pointer without a cast [enabled by default]
In file included from /tmp/vbox.0/r0drv/linux/the-linux-kernel.h:88:0,
                 from /tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:31:
include/linux/mm.h:1288:6: note: expected ‘struct mm_struct *’ but argument is of type ‘struct mm_struct *’
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:1039:29: warning: passing argument 5 of ‘get_user_pages’ makes pointer from integer without a cast [enabled by default]
In file included from /tmp/vbox.0/r0drv/linux/the-linux-kernel.h:88:0,
                 from /tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:31:
include/linux/mm.h:1288:6: note: expected ‘struct page **’ but argument is of type ‘int’
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:1039:29: warning: passing argument 6 of ‘get_user_pages’ makes pointer from integer without a cast [enabled by default]
In file included from /tmp/vbox.0/r0drv/linux/the-linux-kernel.h:88:0,
                 from /tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:31:
include/linux/mm.h:1288:6: note: expected ‘struct vm_area_struct **’ but argument is of type ‘int’
/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:1039:29: error: too many arguments to function ‘get_user_pages’
In file included from /tmp/vbox.0/r0drv/linux/the-linux-kernel.h:88:0,
                 from /tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.c:31:
include/linux/mm.h:1288:6: note: declared here
cc1: some warnings being treated as errors
make[2]: *** [/tmp/vbox.0/r0drv/linux/memobj-r0drv-linux.o] Error 1
make[1]: *** [_module_/tmp/vbox.0] Error 2
make: *** [vboxdrv] Error 2
After the patch mentioned on the first page, plus another one I googled for, it's all good.
-
I'm trying to run a container (https://hub.docker.com/r/garywiz/docker-grav/) that requires --create-user appended to the run statement, but the plugin tells me "unknown flag". Is there a way to do this?
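One thing worth checking: `--create-user` looks like an argument consumed by the image's entrypoint (that image uses chaperone), not a docker flag, so it has to come after the image name. That's presumably why the plugin rejects it as an "unknown flag". Running the container by hand shows the placement; the port mapping here is a guess based on the image's docs, so adjust as needed:

```shell
# Everything after the image name is passed to the container's entrypoint,
# so --create-user goes to chaperone rather than to docker itself.
docker run -d -p 8080:8080 garywiz/docker-grav --create-user
```

If the plugin has a separate field for the container command or its arguments (as opposed to extra docker flags), putting `--create-user` there should have the same effect.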