SMB fails after ZFS drive replacement

  • Hello OMV!


    I had to replace a dying drive and resilver my ZFS pool. This was successful.


    When I look at the files on the Linux side, everything works as expected. I can see the files through any of the web-based services I have running in Docker containers (PLEX and Duplicati work just fine). I just cannot connect to my server through SMB.


    When I try to connect from my Mac with any of my credentials, I get "Connection Failed".


    When I then try to re-save any of my shared folders in OMV -> Storage -> Shared Folders, I get this message:

    Code
    500 - OK Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color samba 2>&1' with exit code '1': lotus:
    ----------
              ID: configure_samba_global
        Function: file.managed
            Name: /etc/samba/smb.conf
          Result: True
         Comment: File /etc/samba/smb.conf is in the correct state
         Started: 22:48:06.464414
        Duration: 35.757 ms
         Changes:
    ----------
              ID: configure_samba_shares
        Function: file.append
            Name: /etc/samba/smb.conf
          Result: False
         Comment: An exception occurred in this state: Traceback (most recent call last):
                    File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl
                      output = template.render(**decoded_context)
                    File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1090, in render
                      self.environment.handle_exception()
                    File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 832, in handle_exception
                      reraise(*rewrite_traceback_stack(source=source))
                    File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 28, in reraise
                      raise value.with_traceback(tb)
                    File "<template>", line 36, in top-level template code
                    File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 465, in call
                    ...


    I am running OMV on a dedicated Intel Xeon server under Proxmox, kernel Linux 5.19.17-1-pve.


    Running systemctl status smbd.service shows the following:



    Please help. Thank you!

  • SOLVED!


    When I resilvered the ZFS pool, the UUID changed.


    config.xml reflected the new UUID in the <fstab><mntent> block, but NOT in the <shares><sharedfolder><mntentref> blocks!
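
    To confirm this kind of mismatch, you can dump both sets of IDs from config.xml with grep (a quick sketch; every <mntentref> should match the <uuid> of some <mntent> block, so a value that shows up only in the second list is the stale reference):

    Code
    # List every <uuid> and every <mntentref> recorded in config.xml.
    # A <mntentref> value that matches no <uuid> is pointing at a
    # mount entry that no longer exists.
    grep -o '<uuid>[^<]*</uuid>' /etc/openmediavault/config.xml | sort -u
    grep -o '<mntentref>[^<]*</mntentref>' /etc/openmediavault/config.xml | sort -u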


    So, to fix this problem in the future, do the following:
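
    First, back up config.xml; a bad edit here can break the OMV web UI (the backup step is just my own precaution):

    Code
    # Keep a dated copy of config.xml so the edit can be rolled back.
    sudo cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.$(date +%F).bak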


    sudo nano /etc/openmediavault/config.xml


    Scroll to the <mntent> block for the ZFS pool. In my case the pool's name is tank, so the block contains <fsname>tank</fsname>.


    Note the UUID. Mine is ae250631-f8c5-4d7d-9f7d-b1251c7e19a0.
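
    If you'd rather not scroll, grep can pull the block out; this assumes the <uuid> element sits within a few lines of <fsname>, as it does in a stock OMV config.xml:

    Code
    # Show a few lines of context around the pool's <fsname> entry;
    # the <uuid> of the surrounding <mntent> block appears just above it.
    grep -B4 -A4 '<fsname>tank</fsname>' /etc/openmediavault/config.xml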


    Scroll down to the <shares> block and find ALL <sharedfolder> blocks connected to the ZFS pool. Comment out the old <mntentref> entry and replace it with the new UUID (or use the sed shortcut after the example below). In my case, each <sharedfolder> was still referencing the defunct 64951ccf-cf89-4990-a9ea-0ca93b995745 UUID.


    Here is one example for my photos share:


    Code
             <shares>
                <sharedfolder>
                    <uuid>5eea8d41-0dc2-4583-9c40-e717c08b337a</uuid>
                    <name>photos</name>
                    <comment></comment>
                    <mntentref>ae250631-f8c5-4d7d-9f7d-b1251c7e19a0</mntentref>
                    <!-- <mntentref>64951ccf-cf89-4990-a9ea-0ca93b995745</mntentref> -->
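
    If you have many shared folders, a sed one-liner can swap every stale reference in one pass instead of commenting out each entry by hand (substitute your own old and new UUIDs; the -i.bak flag keeps a copy of the original file):

    Code
    # Replace every occurrence of the defunct mount UUID with the new one;
    # sed leaves the untouched original at config.xml.bak.
    sudo sed -i.bak 's/64951ccf-cf89-4990-a9ea-0ca93b995745/ae250631-f8c5-4d7d-9f7d-b1251c7e19a0/g' /etc/openmediavault/config.xml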


    Save, Reboot, and Enjoy!
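
    A reboot did it for me, but re-deploying the Samba configuration (the same command the error above was running) and restarting the service should achieve the same result without a full restart:

    Code
    # Re-render /etc/samba/smb.conf from the corrected config.xml,
    # then restart Samba.
    sudo omv-salt deploy run samba
    sudo systemctl restart smbd.service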

  • jcatlanta

    Added the "solved" label.
