Posts by piet
-
-
But maybe it is a bug?
I tried with the latest Armbian OS image offered for this board (Helios4), which is Armbian_24.2.1_Helios4_bookworm_current_6.6.16_minimal.img.xz (OMV 7, no OMV 6 here), but I still get the same error.
What is the maximum disk size that can be used on a 32-bit system? I know there is a limitation for RAM, but for disks...?
Or is there a disk size limitation with the ARMv7 processor?
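For what it's worth, a quick way to confirm the word size of the system and the size the kernel actually reports for the array (the /dev/md0 name below is only an assumption; the real name is listed in /proc/mdstat):
Code
# Confirm the architecture and whether the system is 32-bit
uname -m           # e.g. armv7l
getconf LONG_BIT   # prints 32 or 64

# Size in bytes that the kernel reports for the array device
cat /proc/mdstat
blockdev --getsize64 /dev/md0   # /dev/md0 is an assumed name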
-
-
Thank you for your answer.
The system is armv7l, so 32-bit.
Size of the 3 disks: 14 TB each.
And here is the result of the ls command (one standalone 4 TB disk, and the 3 disks of 14 TB in a linear configuration).
Code
root@helios4:/home/ubuntu# ls -al /srv/
total 24
drwxr-xr-x 6 root root 4096 5 fév 15:46 .
drwxr-xr-x 20 root root 4096 1 jui 2024 ..
drwxr-xr-x 4 root root 4096 5 fév 15:39 dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
drwxrwxrwx 2 root root 4096 5 fév 15:46 dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
drwxr-xr-x 3 root root 4096 1 jui 2024 pillar
drwxr-xr-x 7 root root 4096 1 jui 2024 salt
-
-
Hello.
Here is the full message:
Quote
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': helios4:
----------
ID: create_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa for file /etc/fstab was charged by text
Started: 22:06:54.481561
Duration: 2.918 ms
Changes:
----------
ID: mount_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
Result: True
Comment: Target was already mounted
Started: 22:06:54.488376
Duration: 121.92 ms
Changes:
----------
umount:
Forced remount because options (acl,user_xattr) changed
----------
ID: create_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615 for file /etc/fstab was charged by text
Started: 22:06:54.611378
Duration: 3.627 ms
Changes:
----------
ID: mount_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
Result: False
Comment: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
Started: 22:06:54.615529
Duration: 122.342 ms
Changes:
----------
ID: append_fstab_entries
Function: file.blockreplace
Name: /etc/fstab
Result: True
Comment: No changes needed to be made
Started: 22:06:54.741659
Duration: 12.118 ms
Changes:
Summary for helios4
------------
Succeeded: 4 (changed=1)
Failed: 1
------------
Total states run: 5
Total run time: 262.925 ms
[ERROR ] Command 'mount' failed with return code: 32
[ERROR ] stderr: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
[ERROR ] retcode: 32
[ERROR ] mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': helios4:
----------
ID: create_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa for file /etc/fstab was charged by text
Started: 22:06:54.481561
Duration: 2.918 ms
Changes:
----------
ID: mount_filesystem_mountpoint_037cf2d6-559a-4b0d-8411-8f3c6fe60cfa
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
Result: True
Comment: Target was already mounted
Started: 22:06:54.488376
Duration: 121.92 ms
Changes:
----------
umount:
Forced remount because options (acl,user_xattr) changed
----------
ID: create_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615 for file /etc/fstab was charged by text
Started: 22:06:54.611378
Duration: 3.627 ms
Changes:
----------
ID: mount_filesystem_mountpoint_e0cfe461-0515-4d64-b394-1d855452f615
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
Result: False
Comment: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
Started: 22:06:54.615529
Duration: 122.342 ms
Changes:
----------
ID: append_fstab_entries
Function: file.blockreplace
Name: /etc/fstab
Result: True
Comment: No changes needed to be made
Started: 22:06:54.741659
Duration: 12.118 ms
Changes:
Summary for helios4
------------
Succeeded: 4 (changed=1)
Failed: 1
------------
Total states run: 5
Total run time: 262.925 ms
[ERROR ] Command 'mount' failed with return code: 32
[ERROR ] stderr: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
[ERROR ] retcode: 32
[ERROR ] mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large. in /usr/share/php/openmediavault/system/process.inc:242
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(178): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusPn...', '/tmp/bgoutputIy...')
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#7 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
#8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
#11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
#12 {main}
-
Hello.
I'm sorry, but I can't find any solution.
I formatted, recreated the RAID, reinstalled OMV, and read some threads on the forum, but I still get the same error message when I try to mount the file system.
If someone has an idea to fix it, that would be great.
Thank you.
Best regards.
Pierre
Hostname: helios4
Version: 6.9.16-1 (Shaitan)
Processor: ARMv7 Processor rev 1 (v7l)
Kernel: Linux 5.15.93-mvebu
System time: Wed Feb 5 18:30:44 2025
Error logs:
Code
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': helios4:
----------
ID: create_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e for file /etc/fstab was charged by text
Started: 18:25:28.607623
Duration: 4.365 ms
Changes:
----------
ID: mount_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
Result: False
Comment: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
Started: 18:25:28.618016
Duration: 194.488 ms
Changes:
----------
ID: create_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a
Function: file.accumulated
Result: True
Comment: Accumulator create_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a for file /etc/fstab was charged by text
Started: 18:25:28.813259
Duration: 3.405 ms
Changes:
----------
ID: mount_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a
Function: mount.mounted
Name: /srv/dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
Re...
-
Ok, so simple. Thank you very much.
Have a nice day.
-
In English?
-
Hello.
I created a RAID 5 with 5 drives, and when I applied the configuration (Pending configuration changes), I got an error message (see the bottom of this post).
But apparently the RAID 5 is resyncing fine.
I have since emptied the cache, rebooted the NAS, etc., but I still get this error message. I don't know why it appeared or how to fix it...
If someone has an idea...
Thank you very much.
Pierre.
System Information
Version: 7.2.1-1 (Sandworm)
Processor: Intel(R) N100
Kernel: Linux 6.1.0-21-amd64
System time: Wed 19 Jun 2024 09:26:53 AM CEST
Code
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color mdadm 2>&1' with exit code '1': debian: Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.mdadm.20mdadm' failed: Jinja error: Object of type StrictUndefined is not JSON serializable
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 477, in render_jinja_tmpl
    output = template.render(**decoded_context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1301, in render
    self.environment.handle_exception()
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 936, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "<template>", line 43, in top-level template code
  File "/usr/lib/python3/dist-packages/salt/utils/jinja.py", line 1003, in format_json
    json_txt = salt.utils.json.dumps(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/salt/utils/json.py", line 170, in dumps
    return json_module.dumps(obj, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
          ^^^^^^^^^^^
  File "/usr/lib/python3.11/json/encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
             ^^^...
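For reference, a way to double-check from the command line that the array itself is healthy and still resyncing while the web UI error is investigated (the /dev/md0 device name is only an assumption; the real name is shown in /proc/mdstat):
Code
# Watch the resync progress of the new array
cat /proc/mdstat

# Detailed state of the array (device name assumed)
mdadm --detail /dev/md0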
-
Oops.
Thank you very much for your answers.
After installing it, it's working great now!
Best regards.
-
Hello.
I want to test the new version 7 of OMV (7.0-20 Sandworm).
So I used VMware Workstation 17.5 Pro on Windows 10.
No problem during the installation, but I can't see the "Software RAID" submenu in the "Storage" menu.
How can I fix this? Has it moved somewhere else?
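In case the RAID management screen is simply not installed: as far as I know, in OMV 7 it is shipped as a separate plugin, so it may be worth checking for it first (the openmediavault-md package name is my assumption):
Code
# Check whether the RAID plugin is already installed (package name assumed)
dpkg -l | grep openmediavault-md

# If it is missing, install it and then reload the web UI
apt-get update
apt-get install openmediavault-md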
Thank you.
Best regards.
Piet.
-
Hello.
I can't find any information about how to fix errors reported by a BTRFS scrub.
Code
root@omv68:/# btrfs scrub status /dev/md0
UUID: 52ebb5b9-f475-4759-b4c4-3b8ca97245fa
Scrub started: Thu Sep 28 08:44:00 2023
Status: finished
Duration: 213:28:50
Total to scrub: 43.47TiB
Rate: 59.11MiB/s
Error summary: csum=243
  Corrected: 0
  Uncorrectable: 243
  Unverified: 0
root@omv68:/# sudo btrfs device stats --reset /dev/md0
[/dev/md0].write_io_errs 0
[/dev/md0].read_io_errs 0
[/dev/md0].flush_io_errs 0
[/dev/md0].corruption_errs 0
[/dev/md0].generation_errs 0
root@omv68:/# btrfs scrub status /dev/md0
UUID: 52ebb5b9-f475-4759-b4c4-3b8ca97245fa
Scrub started: Thu Sep 28 08:44:00 2023
Status: finished
Duration: 213:28:50
Total to scrub: 43.47TiB
Rate: 59.11MiB/s
Error summary: csum=243
  Corrected: 0
  Uncorrectable: 243
  Unverified: 0
I found some info here: 47237-solved-how-to-fix-btrfs-scrub-errors/
But that doesn't explain how to fix them... Is there a way to fix this?
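For reference, the kernel log usually records which files hit the csum errors during the scrub, which at least shows what would need to be restored from a backup (a sketch; the exact message wording varies between kernel versions):
Code
# Look for the checksum error messages logged while the scrub ran
dmesg | grep -i 'checksum error'
journalctl -k | grep -i BTRFS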
Thank you.
Best regards.
Piet
-
Oh no, how stupid of me.
I thought it was enabled by default.
So everything is working fine now. And OMV 5 is a really excellent product.
Thank you.
Regards.
Pierre.
PS: is there a way to help the project? We have been using OMV for several years, for free, and with great support, also for free. So it is time to give something back (a donation to the project, some work, ...).
-
Hello,
we use OMV on the Helios64 from Kobol (which is a great product, by the way) with OMV 5.5.18-1 (Usul).
Everything is running fine except the statistics: the graphs never show more than 2 minutes of data!
And there are no stats at all for the network or for the RAID 5 (but they are fine for the disk where the OS is installed). See the attached screenshots for more details.
Is there a way to fix this?
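A first step that is often suggested for empty or very short graphs is to redeploy the monitoring stack so collectd rebuilds its RRD files; a sketch, assuming the standard OMV 5 salt deployment targets:
Code
# Redeploy the monitoring configuration and restart the related services
omv-salt deploy run collectd rrdcached monit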
Thank you.
Regards.
-
Just compare the backup and the original files with Nakivo.
And all the tests are OK: no data corrupted.
Please try it, and tell us your opinion after that.
-
Hello @Enwood, here is a solution:
https://forum.openmediavault.o…12To-create-4-partitions/
The solution is first to use the dd command, like this:
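(A sketch only: /dev/sdX below is a placeholder for the target disk and the count is just an example; the idea is to clear the old partition metadata at the start of the drive.)
Code
# Zero out the first 100 MiB of the disk to clear old partition metadata
# /dev/sdX is a placeholder - triple-check the device name before running this
dd if=/dev/zero of=/dev/sdX bs=1M count=100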
Regards.
-
OK.
And on the Synology it was a "classic" mode: RAID 1 with Btrfs format. But maybe Synology writes other partitions or strange, hidden things. It is possible.
Regards.
-
Yes, but now there is no data at all, is there? (after creating the RAID and formatting the filesystem as ext4)
-
Yes, I mean delete post number 24.