Posts by piet

    But maybe it is a bug?


    I tried with the latest Armbian OS version proposed for this motherboard (Helios4), which is Armbian_24.2.1_Helios4_bookworm_current_6.6.16_minimal.img.xz (OMV 7 here, no OMV 6), but I still get the same error.


    What is the maximum disk size we can use on a 32-bit system? I know there are limitations with RAM, but with disks...?


    Or is there a disk size limitation with the ARMv7 processor?

    Thank you for your answer.


    The system is armv7l, so 32-bit.



    Size of the 3 disks: 14 TB each.
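    For scale (an assumption about the cause, not confirmed in this thread): on a 32-bit kernel the page cache index is a 32-bit value, so with 4 KiB pages a single filesystem is capped at 2^32 × 4096 bytes = 16 TiB. A linear array of three 14 TB disks (~42 TB) would exceed that, while a single 4 TB disk would not. A quick check of the arithmetic:

```shell
# 32-bit page cache: 2^32 page indexes x 4096-byte pages = 16 TiB ceiling.
limit=$(( (1 << 32) * 4096 ))
echo "$limit"      # 17592186044416 bytes (16 TiB)

# A linear array of 3 x 14 TB disks is well past that limit:
array=$(( 3 * 14 * 1000 * 1000 * 1000 * 1000 ))
echo "$array"      # 42000000000000 bytes (~38 TiB)
[ "$array" -gt "$limit" ] && echo "over the 32-bit limit"
```

This would be consistent with the "File too large" failure coming from mount(2) rather than from the disks themselves.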


    And the result of the ls command (one 4 TB disk alone, and the 3 disks of 14 TB in a linear configuration):

    Code
    root@helios4:/home/ubuntu# ls -al /srv/
    total 24
    drwxr-xr-x  6 root root 4096  5 fév 15:46 .
    drwxr-xr-x 20 root root 4096  1 jui  2024 ..
    drwxr-xr-x  4 root root 4096  5 fév 15:39 dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
    drwxrwxrwx  2 root root 4096  5 fév 15:46 dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
    drwxr-xr-x  3 root root 4096  1 jui  2024 pillar
    drwxr-xr-x  7 root root 4096  1 jui  2024 salt

    Hello.


    Here is the full message:


    Hello.


    I'm sorry, but I can't find any solution.

    I formatted the disks, recreated the RAID, reinstalled OMV, and read some threads on the forum, but I still get the same error message when I try to mount the file system.

    If someone has an idea how to fix it, that would be great.


    Thank you.

    Best regards.

    Pierre



    Hostname
    helios4

    Version
    6.9.16-1 (Shaitan)

    Processor
    ARMv7 Processor rev 1 (v7l)

    Kernel
    Linux 5.15.93-mvebu

    System time
    Wed Feb 5 18:30:44 2025




    Error log:

    Code
     500 - Internal Server Error
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color fstab 2>&1' with exit code '1':
    helios4:
    ----------
              ID: create_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e
        Function: file.accumulated
          Result: True
         Comment: Accumulator create_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e for file /etc/fstab was charged by text
         Started: 18:25:28.607623
        Duration: 4.365 ms
         Changes:
    ----------
              ID: mount_filesystem_mountpoint_646a8cd1-3b48-44ea-9dc1-537a82f9000e
        Function: mount.mounted
            Name: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6
          Result: False
         Comment: mount: /srv/dev-disk-by-uuid-e2a2a201-e85c-4b28-9571-ea6b2c2bd6c6: mount(2) system call failed: File too large.
         Started: 18:25:28.618016
        Duration: 194.488 ms
         Changes:
    ----------
              ID: create_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a
        Function: file.accumulated
          Result: True
         Comment: Accumulator create_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a for file /etc/fstab was charged by text
         Started: 18:25:28.813259
        Duration: 3.405 ms
         Changes:
    ----------
              ID: mount_filesystem_mountpoint_71359993-cfbb-4a90-a425-246f1da1218a
        Function: mount.mounted
            Name: /srv/dev-disk-by-uuid-9048fa90-bb40-414d-b195-6ba86bf6077d
        Re... 

    Hello.


    I created a RAID 5 with 5 drives, and when I applied this configuration (Pending configuration changes), I got an error message (see the bottom of this thread).


    But apparently the RAID 5 is resyncing fine.


    I have since emptied the cache, rebooted the NAS, etc., but I still get this error message. I don't know why it appeared or how to fix the problem...


    If someone has an idea...


    Thank you very much.

    Pierre.


    System Information

    Version: 7.2.1-1 (Sandworm)

    Processor: Intel(R) N100

    Kernel: Linux 6.1.0-21-amd64

    System time: Wed 19 Jun 2024 09:26:53 AM CEST



    Code
     500 - Internal Server Error
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color mdadm 2>&1' with exit code '1':
    debian:
        Data failed to compile:
    ----------
        Rendering SLS 'base:omv.deploy.mdadm.20mdadm' failed: Jinja error: Object of type StrictUndefined is not JSON serializable
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 477, in render_jinja_tmpl
        output = template.render(**decoded_context)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1301, in render
        self.environment.handle_exception()
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 936, in handle_exception
        raise rewrite_traceback_stack(source=source)
      File "<template>", line 43, in top-level template code
      File "/usr/lib/python3/dist-packages/salt/utils/jinja.py", line 1003, in format_json
        json_txt = salt.utils.json.dumps(
                   ^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/salt/utils/json.py", line 170, in dumps
        return json_module.dumps(obj, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.11/json/__init__.py", line 238, in dumps
        **kw).encode(obj)
              ^^^^^^^^^^^
      File "/usr/lib/python3.11/json/encoder.py", line 200, in encode
        chunks = self.iterencode(o, _one_shot=True)
        ^^^... 

    Hello.


    I can't find any information about how to fix the errors reported by a BTRFS scrub status.




    I found some info here: 47237-solved-how-to-fix-btrfs-scrub-errors/


    But not a way to fix them... Is there actually a way to fix them? :/
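    For what it's worth, a hedged sketch of the usual workflow (assuming btrfs-progs is installed; the mountpoint below is a placeholder, not a path from this thread). A scrub automatically repairs checksum errors wherever a healthy redundant copy exists; errors reported as uncorrectable have no good copy left, so the affected files, named in the kernel log, have to be restored from backup:

```shell
MNT=/srv/your-btrfs-mountpoint   # placeholder mountpoint

# Re-run the scrub in the foreground (-B) with per-device stats (-d):
btrfs scrub start -Bd "$MNT"
btrfs scrub status "$MNT"

# Cumulative per-device error counters:
btrfs device stats "$MNT"

# The kernel log names the files hit by checksum errors; restore those
# from backup, then reset the counters (-z) so new errors stand out:
dmesg | grep -i 'checksum error'
btrfs device stats -z "$MNT"
```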


    Thank you.


    Best regards.


    Piet

    Oh no! Silly me.

    I thought it was enabled by default.


    So everything is working fine now. And OMV 5 is a really excellent product.


    Thank you.


    Regards.


    Pierre.



    PS: is there a way to help the project? We have been using OMV for several years, for free, and with great help, also for free. So it is time to give something back (a donation to the project, work, ...).

    Hello,


    We use OMV on the Helios64 from Kobol (which is a great product, by the way), running OMV 5.5.18-1 (Usul).


    Everything is running fine except the stats: they never show more than 2 minutes of data...! :/


    And there are no stats at all for the network or for the RAID 5 (but they are OK for the disk where the OS is installed). See the attached screenshots for details.



    Is there a way to fix this?


    Thank you.


    Regards.