Hi,
First of all, thanks for this great piece of software. It was easy to install and set up.
I have a problem, and I am not sure whether it comes from running OMV as a KVM guest on 12.04.3 with network passthrough, or from the IBM ServeRAID M5015 with three 3 TB WD Red hard disks attached to it.
Everything seems to work: CIFS, NFS with Avahi to VDR. Only ownCloud and LVM are enabled; on the ServeRAID I created an LVM volume with OMV.
Failure:
- When I copy a large amount of data to the LVM volume attached to OMV, it runs for about an hour; then the network connection is refused and the copy process breaks off.
- It does not matter whether I copy over CIFS or NFS.
What I found in the log:
Jan 14 09:03:49 omv kernel: [ 2.832747] PM: Starting manual resume from disk
Jan 14 09:03:49 omv kernel: [ 2.832749] PM: Resume from partition 254:5
Jan 14 09:03:49 omv kernel: [ 2.832750] PM: Checking hibernation image.
Jan 14 09:03:49 omv kernel: [ 2.832816] PM: Error -22 checking image file
Jan 14 09:03:49 omv kernel: [ 2.832818] PM: Resume from disk failed.
I have disabled power management for the HDDs, and under Power Management I also disabled monitoring.
Why are there white gaps in the graphs? Where do they come from? They appear exactly at the times when I tried to copy my old archive to the new OMV.
What I also don't understand in the log, and what doesn't look normal:
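As a side note, the "PM: Resume from disk failed" lines at boot are usually harmless: the kernel probes partition 254:5 for a hibernation image and finds none. If you never hibernate, one way to silence them is to tell the initramfs not to look for a resume image at all. A minimal sketch, assuming a Debian-based OMV with initramfs-tools (the conf.d file name is a convention, not something from your logs):

```shell
# Sketch, assuming Debian/initramfs-tools: disable the hibernation-image
# probe at boot so the "PM: Resume from disk failed" message goes away.
echo "RESUME=none" > /etc/initramfs-tools/conf.d/resume
update-initramfs -u   # rebuild the initramfs so the change takes effect
```

This only removes the log noise; it should not be related to the stalled copies.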
Jan 15 11:21:03 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/df/df-media-0ddf89e7-030f-40e0-b53f-7c388c687ff0; value time = 1389781263; last cache update = 1389781263;
Jan 15 11:21:03 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:21:03 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.4 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:21:36 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.3 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:22:04 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.4 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:22:04 OMV-Nas monit[1410]: 'localhost' 'localhost' loadavg(1min) check succeeded [current loadavg(1min)=3.3]#012
Jan 15 11:22:34 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/load/load; value time = 1389781354; last cache update = 1389781354;
Jan 15 11:22:34 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:22:34 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/memory/memory-used; value time = 1389781354; last cache update = 1389781354;
Jan 15 11:22:34 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:22:34 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/memory/memory-buffered; value time = 1389781354; last cache update = 1389781354;
Jan 15 11:22:34 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:22:34 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/memory/memory-cached; value time = 1389781354; last cache update = 1389781354;
Jan 15 11:22:34 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:22:34 OMV-Nas collectd[1539]: uc_update: Value too old: name = localhost/memory/memory-free; value time = 1389781354; last cache update = 1389781354;
Jan 15 11:22:34 OMV-Nas collectd[1539]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Jan 15 11:22:36 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.5 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:23:07 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.3 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:23:37 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.3 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:24:07 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.3 matches resource limit [loadavg(5min)>2.0]
Jan 15 11:25:54 OMV-Nas kernel: imklog 4.6.4, log source = /proc/kmsg started.
Jan 15 11:25:54 OMV-Nas rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1387" x-info="http://www.rsyslog.com"] (re)start
Jan 15 11:25:54 OMV-Nas kernel: [ 0.000000] Initializing cgroup subsys cpuset
Some memory problems or a resource limit, followed by a reboot? But why? Is that done by a watchdog?
Maybe one of you has a suggestion or a solution?
Thanks
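The "imklog ... started" and rsyslogd "(re)start" lines at 11:25:54 do look like a fresh boot, since imklog only announces the /proc/kmsg source when the logger comes up. To confirm whether the box actually rebooted (rather than rsyslogd alone restarting), you can search the syslog for that boot marker and cross-check the times with `last -x reboot`. A small sketch using a sample file standing in for /var/log/syslog:

```shell
# Sketch: find boot markers in a syslog. The sample file stands in for
# /var/log/syslog; each 'imklog ... started' line marks a fresh boot.
cat > /tmp/syslog.sample <<'EOF'
Jan 15 11:24:07 OMV-Nas monit[1410]: 'localhost' loadavg(5min) of 2.3 matches resource limit
Jan 15 11:25:54 OMV-Nas kernel: imklog 4.6.4, log source = /proc/kmsg started.
Jan 15 11:25:54 OMV-Nas rsyslogd: [origin software="rsyslogd" swVersion="4.6.4"] (re)start
EOF
grep -E 'imklog .* started' /tmp/syslog.sample   # prints each boot-marker line
# on the real system, cross-check the timestamps with: last -x reboot
```

If the reboot times line up with the copy stalls, the next suspects would be an OOM kill or a hardware reset rather than monit, which by default only alerts on load, it does not reboot the host.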