Thanks, I pasted the part of the database from smartctl's drivedb.h that contains my drive and changed the parameter "-v 195,raw48,Cumulativ_Corrected_ECC " to "-v 195,raw16:w543210,Bogus_ECC_Attribute ". Now the drive status is OK.
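For anyone repeating this: the drivedb.h edit amounts to changing the presets string of the drive's entry. A sketch (entry fields abbreviated, the regexp here is hypothetical):

```
// drivedb.h entry (sketch) -- only the presets string is changed.
{ "Micron 1100 SSDs",             // model family (abbreviated)
  "Micron_1100_.*",               // model regexp (hypothetical)
  "", "",                         // firmware regexp, warning message
  "-v 195,raw16:w543210,Bogus_ECC_Attribute " // was: -v 195,raw48,Cumulativ_Corrected_ECC
},
```

An edited copy can also be loaded without replacing the installed file, via smartctl's -B option (smartctl -B /path/to/drivedb.h).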
Posts by karnasw
-
-
So there is no way to turn off that false notification?
-
OK, I was looking at the wrong docs.
In salt.states.file (saltproject.io) the argument is name instead of path. Now the deploy works.
But the drive still has a Bad health status (with the -i or -I parameter):
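The working version of the file.replace state, with name in place of path (a sketch based on the state shown further down):

```yaml
micron_parameters_replace:
  file.replace:
    - name: "/etc/smartd.conf"
    - pattern: "/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865"
    - repl: "/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865 -a -I 195"
    - count: 1
    - backslash_literal: True
```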
-
I created a 30micronparameters.sls file:
Code
micron_parameters_replace:
  file.replace:
    - path: "/etc/smartd.conf"
    - pattern: "/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865"
    - repl: "/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865 -a -I 195"
    - count: 1
    - backslash_literal: True
but when I try to deploy I get this error:
Code
root@openmediavault:~# omv-salt deploy run smartmontools
Traceback (most recent call last):
  File "/usr/sbin/omv-salt", line 191, in <module>
    sys.exit(main())
  File "/usr/sbin/omv-salt", line 186, in main
    cli()
  File "/usr/lib/python3/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/sbin/omv-salt", line 171, in deploy_run
    result = caller.cmd("state.orchestrate", names)
  File "/usr/lib/python3/dist-packages/salt/client/__init__.py", line 2174, in cmd
    return self.sminion.functions[fun](*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 149, in __call__
    return self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1234, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1249, in _run_as
    return _func_or_method(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/modules/state.py", line 356, in orchestrate
    return _orchestrate(
  File "/usr/lib/python3/dist-packages/salt/runners/state.py", line 126, in orchestrate
    running = minion.functions["state.sls"](
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 149, in __call__
    return self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1234, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1249, in _run_as
    return _func_or_method(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/modules/state.py", line 1479, in sls
    ret = st_.state.call_high(high_, orchestration_jid)
  File "/usr/lib/python3/dist-packages/salt/state.py", line 3555, in call_high
    ret = self.call_chunks(chunks)
  File "/usr/lib/python3/dist-packages/salt/state.py", line 2723, in call_chunks
    running = self.call_chunk(low, running, chunks)
  File "/usr/lib/python3/dist-packages/salt/state.py", line 3245, in call_chunk
    running[tag] = self.call(low, chunks, running)
  File "/usr/lib/python3/dist-packages/salt/utils/decorators/state.py", line 45, in _func
    result = func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/state.py", line 2308, in call
    cdata = salt.utils.args.format_call(
  File "/usr/lib/python3/dist-packages/salt/utils/args.py", line 485, in format_call
    raise SaltInvocationError(msg)
salt.exceptions.SaltInvocationError: 'path' is an invalid keyword argument for 'file.replace'
[ERROR ] An un-handled exception was caught by Salt's global exception handler:
SaltInvocationError: 'path' is an invalid keyword argument for 'file.replace'
In the docs I can see that path is a valid argument:
-
I tried adding the -I flag for attribute 195 in /usr/local/etc/smartd.conf, and it made no difference.
I see openmediavault auto-generates /etc/smartd.conf, so adding this flag there is pointless.
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
DEFAULT -a -o on -S on -T permissive -W 0,0,0 -n never,q
/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865
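For reference, smartd.conf has an -I directive for ignoring a single attribute, so the device line would need to end up like this (a sketch of the intended result):

```
/dev/disk/by-id/ata-Micron_1100_SATA_256GB_18061B0FF865 -a -I 195
```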
-
Hi, I have a Wyse 5070 and a Micron 1100 SSD. The drive was tested with many applications, including one from the drive manufacturer, and everything is OK.
How can I exclude an attribute with a false-positive error from smartctl? The attribute is 195 Cumulativ_Corrected_ECC.
Code
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever been run.
Total time to complete Offline data collection: ( 678) seconds.
Offline data collection capabilities: (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:      (   2) minutes.
Extended self-test routine recommended polling time:   (   4) minutes.
Conveyance self-test routine recommended polling time: (   3) minutes.
SCT capabilities:              (0x0035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  5 Reallocate_NAND_Blk_Cnt PO--CK  100   100   010    -    0
  9 Power_On_Hours          -O--CK  100   100   000    -    1544
 12 Power_Cycle_Count       PO--CK  100   100   001    -    1700
181 Program_Fail_Cnt_Total  PO--CK  100   100   001    -    0
182 Erase_Fail_Count_Total  PO--CK  100   100   001    -    0
177 Wear_Leveling_Count     PO--CK  092   092   010    -    125
187 Reported_Uncorrect      PO--CK  100   100   001    -    0
194 Temperature_Celsius     -O---K  056   035   000    -    44 (Min/Max 17/65)
199 UDMA_CRC_Error_Count    -O--CK  100   100   000    -    0
238 Unknown_Attribute       ----CK  092   092   000    -    8
175 Program_Fail_Count_Chip PO--CK  100   100   000    -    0
176 Erase_Fail_Count_Chip   PO--CK  100   100   000    -    0
178 Used_Rsvd_Blk_Cnt_Chip  PO--CK  100   100   000    -    0
180 Unused_Reserve_NAND_Blk PO--CK  000   000   000    -    2041
195 Cumulativ_Corrected_ECC POSR-K  100   000   050   Past  0
241 Total_LBAs_Written      PO--CK  100   100   000    -    32022891235
242 Total_LBAs_Read         PO--CK  100   100   000    -    122543073087
179 Used_Rsvd_Blk_Cnt_Tot   PO--CK  100   100   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning
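As a side note, the FAIL column in output like the above can be filtered mechanically. A small sketch using awk on smartctl -A-style text (the heredoc stands in for real smartctl output):

```shell
# Print ID, name, and FAIL status for attributes whose FAIL column
# (field 7) is anything other than "-".
awk '$1 ~ /^[0-9]+$/ && NF >= 8 && $7 != "-" {print $1, $2, $7}' <<'EOF'
  5 Reallocate_NAND_Blk_Cnt PO--CK 100 100 010 - 0
195 Cumulativ_Corrected_ECC POSR-K 100 000 050 Past 0
199 UDMA_CRC_Error_Count -O--CK 100 100 000 - 0
EOF
# prints: 195 Cumulativ_Corrected_ECC Past
```

In real use the heredoc would be replaced by piping `smartctl -A /dev/sdX` into awk.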
-
85%. When I copied more files to the drive, it went green. Something is wrong.
-
-
Hi, I have a weird problem. I have 64-bit Raspberry Pi OS installed on a USB SSD. For "NAS" data I attached two USB drives (a 2.5" HDD and a 3.5" HDD, both BTRFS, with an external power supply). When I don't open the web control panel, everything is OK: after a sudo reboot command over SSH the Pi reboots in a few seconds. But when I open the web control panel and view the dashboard with the drive status, or just view the filesystems in the menu, after a sudo reboot command it hangs on "A stop job is running /srv/dev-disk-by-uuid ..." and after a few minutes it turns into "BTRFS: error (device sdc1) in write_all_supers:3878: errno=-5 IO failure (errors while submitting device barriers)". What can be wrong?
-
Hi, my filesystem mountpoint's "Used" bar is red. What does that mean? Available space is 845.59 GiB, used 83.90 GiB.
Edit:
When I copied a few files to that drive, the color changed to green. Weird.
-
Hi, I temporarily changed the owner of all files to pi:pi and changed all folder permissions with chmod -R 770. The containers now have access to the host files, but the configs are broken, so I have to configure everything from the beginning.
So everything is OK.
-
Hi, I moved from OMV5 to OMV6. I installed Docker and created a user which belongs to the groups adm, root, users, and sudo, with UID = 1001 and GID = 100 (users).
The containers have volumes mapped to my old container configs, but the containers don't read them. They also have no access to write files on the drive. "chown -R user:group" didn't help. The owner and group have r/w rights.
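If the images involved are linuxserver.io-style, the usual way to hand that user to the container is the PUID/PGID environment variables rather than chown. A compose-file sketch (service name, image, and host path are placeholders):

```yaml
services:
  myapp:                                 # placeholder service name
    image: lscr.io/linuxserver/myapp     # hypothetical linuxserver.io-style image
    environment:
      - PUID=1001                        # the UID mentioned above
      - PGID=100                         # GID of the "users" group
    volumes:
      - /srv/appdata/myapp:/config       # placeholder host path to the old config
```

This convention only works with images that honor PUID/PGID; other images need their own user-mapping mechanism.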
-
My enclosure has an external power supply.
-
Hi, I have a problem with my HDD mounted in an ASM1153 enclosure on my Pi 4B. It freezes after a random amount of time. I can't check S.M.A.R.T. in OMV, read any file over SMB, or access the mount point in the OS. I have to reboot the Pi; then it works for a while and then freezes again. How can I check where the problem is? Because of that, my OS shuts down and only comes back up after a few minutes.
I tried to flash new firmware (ASMT-2115) to the enclosure, but I always get a fail status in the ASMedia MP Tool. Should I try another enclosure? Which one?
The HDD has no sector errors.
-
Hi, I configured a backup on my Pi with OMV, but it's very slow. Normally files write to this HDD at 100 MB/s (over SMB, or an SSD-to-HDD copy), but dd creates the backup at 7.4 MB/s. Is that normal?
Also, when I run the backup with cron it takes 100% CPU and I can't do anything on the device.
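One common cause of slow dd backups is dd's default 512-byte block size; an explicit bs usually helps a lot. A sketch (the output path here is a placeholder, writing zeros instead of a real disk image):

```shell
# /tmp/backup-test.img stands in for the real backup target (placeholder).
# bs=4M replaces dd's 512-byte default; conv=fsync flushes to disk
# before dd exits, so the reported rate is honest.
dd if=/dev/zero of=/tmp/backup-test.img bs=4M count=4 conv=fsync
# Under cron, prefixing the command with `nice -n 19 ionice -c 3`
# keeps the backup job from monopolizing the CPU and disk.
```

With count=4 and bs=4M this writes exactly 16 MiB; for a real backup, if= would be the source device and count would be omitted.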
-
I have the same problem, did you find a solution?
-
I've done that before; it works until I update gravity, then the owner changes to openmediavault-webgui and the group to spi.
-
OK, I have another problem: I can't add/remove/update any adlist, because pihole doesn't have permission on the gravity.db file. After a gravity update, the file keeps changing to owner openmediavault-webgui and group spi (from owner pi and group pi). How can I prevent this change of owner and group?
-
Quote
(because your NAS do not have a WEBGUI and a explorer)
I have Raspberry Pi Desktop and OMV on it (it's working OK). I can access any site, Portainer, the OMV GUI and other sites, but not pihole (http://192.168.162.11/admin).
-
Hi, earlier I had a pihole setup like in the first post. Now I tried the solution with macvlan; it's working, but I don't have access to the WebGUI from the Raspberry itself (it works from other devices on the network). What should I change?
My macvlan configuration:
IPv4 Subnet - 192.168.162.0/24
IPv4 Gateway - 192.168.162.1
IPv4 IP range - 192.168.162.11/28
IPv4 Excluded IPs -
and the connection to the container:
Container Name  IPv4 Address
pihole          192.168.162.11/24
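This is expected macvlan behavior: the host cannot reach its own macvlan containers directly, only other machines on the LAN can. A common workaround is a macvlan "shim" interface on the host. A sketch of the host-side commands (eth0 as parent interface and the .250 host address are assumptions, not from the original setup):

```
# Create a bridge-mode macvlan interface on the same parent NIC.
ip link add macvlan-shim link eth0 type macvlan mode bridge
# Give it a free address outside the container range.
ip addr add 192.168.162.250/32 dev macvlan-shim
ip link set macvlan-shim up
# Route traffic for the pihole container through the shim.
ip route add 192.168.162.11/32 dev macvlan-shim
```

These commands need root and do not persist across reboots; on a Pi they would typically go into a systemd unit or network hook.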