Posts by Gwarph

    I need some help here please.


    I'm getting the Software Failure screen when I try to access System -> OMV-Extras or Kernel, and Storage -> ZFS.

    I don't know when this started, but I'm replacing a ZFS drive, and when I try to open ZFS from the GUI I get the error.


    I can enter ZFS commands at the CLI, and everything seems to be functioning fine.
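    While the GUI is broken, a quick CLI sanity check of the pools might look like this (a sketch, assuming the ZFS userland tools are installed; it prints a fallback message if they are not):

```shell
# Sanity-check pool health from the CLI while the web GUI is unusable.
# 'zpool status -x' reports only pools with problems.
if command -v zpool >/dev/null 2>&1; then
  zpool status -x
else
  echo "zfs tools not installed"
fi
```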


    I don't know where to start troubleshooting this so any help is appreciated.


    Thanks.

    Well that was one of the best upgrades ever! Thanks to all who worked hard at achieving this.


    I had one error that I thought I would document for others.


    Last year I replaced my boot drive, a Kingston SV300. The 6.x upgrade threw out one error:


    Code
    Setting up grub-pc (2.04-20) ...
    /dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B774A00D337 does not exist, so cannot grub-install to it!
    You must correct your GRUB install devices before proceeding:
    
      DEBIAN_FRONTEND=dialog dpkg --configure grub-pc
      dpkg --configure -a
    dpkg: error processing package grub-pc (--configure):
     installed grub-pc package post-installation script subprocess returned error exit status 1
    Setting up python3-six (1.16.0-2) ...

    I ran:

    Code
    dpkg -D10113 --configure grub-pc

    A popup window asked which drives to install GRUB on; I chose the current boot volume (a Kingston SV400) and it completed fine.


    I rebooted (this was my first reboot attempt) and it booted successfully into OMV.
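    For anyone hitting the same grub-pc error: a minimal check of whether the configured install device still exists might look like this (the path is the one from my error message; substitute your own):

```shell
# The device id grub-pc was configured with (from the error above);
# if the symlink is gone, re-run the device selection dialog.
DEV=/dev/disk/by-id/ata-KINGSTON_SV300S37A120G_50026B774A00D337
if [ -e "$DEV" ]; then
  echo "device present"
else
  echo "device missing - rerun: dpkg-reconfigure grub-pc"
fi
```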

    Update 3:

    I reloaded OMV from scratch.

    Everything works fine until I mount my ext4 RAID array; at that point the quota check starts running, and the GUI hangs and eventually errors out with a communication error.


    I edited /etc/fstab and removed the quota mount options, but on the next boot OMV mounts the filesystems with the quota options again.
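    For illustration, stripping the quota mount options from an fstab-style line is a one-line sed edit (the line below is a made-up example, not my actual entry), though as noted above OMV regenerates the entry from its own config on boot, so the edit alone does not stick:

```shell
# Hypothetical fstab line carrying journaled-quota mount options
line='UUID=c75776fd /srv/data ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2'
# Remove the usrjquota=, grpjquota= and jqfmt= options
echo "$line" | sed -E 's/,(usr|grp)jquota=[^,[:space:]]*//g; s/,jqfmt=[^,[:space:]]*//g'
```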


    Anyone have an idea on what I can do next?

    I now cannot save/apply changes to the GUI. I get the following error:


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian: Data failed to compile:
    ---------- Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping in "<unicode string>", line 42, column 1
    found conflicting ID 'quota_off_no_quotas_' in "<unicode string>", line 96, column 1 in /usr/share/php/openmediavault/system/process.inc:195
    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusLp...', '/tmp/bgoutputnQ...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(189): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}


    I think I will wipe out my boot volume again and re-load from scratch.

    I did a new install of OMV yesterday on my existing system. I am on 5.6.2.1.


    The GUI is unresponsive - I get a communication error with messages like:


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian:
    ----------
              ID: quota_off_no_quotas_c75776fd-72f7-470a-8d4f-71bcb020bed3
        Function: cmd.run
            Name: quotaoff --group --user /dev/disk/by-uuid/c75776fd-72f7-470a-8d4f-71bcb020bed3 || true
          Result: True
         Comment: Command "quotaoff --group --user /dev/disk/by-uuid/c75776fd-72f7-470a-8d4f-71bcb020bed3 || true" run
         Started: 07:31:35.131664
        Duration: 101.865 ms
         Changes:
                  ----------
                  pid: 23113
                  retcode: 0
                  stderr:
                  stdout:
    ----------
              ID: quota_check_no_quotas_c75776fd-72f7-470a-8d4f-71bcb020bed3
        Function: cmd.run
            Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/c75776fd-72f7-470a-8d4f-71bcb020bed3
          Result: True
         Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/c75776fd-72f7-470a-8d4f-71bcb020bed3" run
         Started: 07:31:35.233957
        Duration: 1072980.357 ms
         Changes:
                  ----------
                  pid: 23115
                  retcode: 0
                  stderr:
                      quotacheck: Scanning /dev/md127 [/srv/dev-disk-by-uuid-c75776fd-72f7-470a-8d4f-71bcb020bed3]
                      quotacheck: Checked 102831 directories and 769655 files
                  stdout:
                      |/-\|/-\|/-\|/-\|/-\|/-\|/

    If I run top, it shows quotacheck is running. Once it finishes, the GUI works fine.
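    A quick way to tell whether a quotacheck scan is still the culprit, without scrolling through top (a sketch; pgrep is part of procps on Debian):

```shell
# If quotacheck is still scanning the array, the GUI stays
# unresponsive until it finishes.
if pgrep -x quotacheck >/dev/null; then
  echo "quotacheck still running"
else
  echo "no quotacheck running"
fi
```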


    I saw that this was fixed in 5.5.13, I believe, in the post "Quota Error when mounting XFS FileSystem".

    What can I do to get the GUI functionality back?


    Thanks

    Gwarph

    Thanks, ryecoaaron.


    I installed both the Clonezilla and GParted Live images from the GUI. Both of them seemed to hang at "creating GRUB entry ...". However, when I rebooted I had the option to select either of them.


    My current system drive was 120 GB, so I booted from GParted Live and shrank the system partition to 25 GB. I rebooted and everything was fine.

    I then booted off of Clonezilla and followed the prompts to clone my Kingston drive to a USB thumb drive. I booted off the USB stick (USB 3.0, surprisingly quick) and everything was fine.


    I moved the Kingston drive to another SATA port (my motherboard has an LSI chip, so 8 data ports, plus I think 4 more on the board: 2 at 6 Gb/s and 2 at 3 Gb/s). I'm now installing a second SSD out of an old MacBook; my plan is to mirror the system drive on ZFS, and then move the data drives to ZFS.

    I was trying to undo the bonding of my two NICs. I used omv-firstaid to define one NIC, and it errored out the first two times. I tried a third time, and after about a minute it returned with success.


    I was able to log into OMV from the GUI. I checked the interface entry in config.xml: it now shows <bondmode>1</bondmode>, where it previously showed <bondmode>4</bondmode>.
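    For reference, the bond mode can be pulled straight out of config.xml with sed; the snippet below uses an inline stand-in for the real /etc/openmediavault/config.xml:

```shell
# Stand-in for the <interface> entry in /etc/openmediavault/config.xml
xml='<interface><devicename>bond0</devicename><bondmode>1</bondmode></interface>'
# Extract the numeric bond mode (1 = active-backup, 4 = 802.3ad/LACP)
echo "$xml" | sed -n 's:.*<bondmode>\([0-9]*\)</bondmode>.*:\1:p'
```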

    I know very little (and there is no need for a qualifier!)


    Is afpd running?
    At the command line, enter: ps aux | grep afpd
    This will show you your running processes.
    If it is running, you should see the daemon, afpd.


    If it is running, you could try restarting it: service netatalk restart
    If it is not running, enter: ls /etc/init.d
    You should see netatalk listed there as a service.
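    The steps above can be rolled into one check (a sketch; service names as in Debian's netatalk package):

```shell
# Is the AFP daemon up? If not, netatalk may need a restart.
if pgrep -x afpd >/dev/null; then
  echo "afpd is running"
else
  echo "afpd not running"
  # service netatalk restart   # try this to bring it back
fi
```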


    I have no idea why OMV doesn't see a service, but the above might help with diagnosing what is going on.

    Quote

    Separate filesystems on the same disk just seems like a recipe for disaster. Frankly, I can't think of any scenario where it would be even remotely necessary.


    I know nothing (well, more than I did this morning) about Linux, but in the AIX world we create multiple filesystems on VGs all the time. My thought was to have separate filesystems so that I could unmount, say, 'PHOTOS' while experimenting with Adobe, with no fear that my data was in danger.


    However, I'm restoring my data and testing the multiple TM backups right now, so all is good.


    Thanks,
    Gwarph

    Hi all,
    I am in the middle of moving from FreeNAS to OMV.
    I have installed everything (?).
    I have looked for solutions on the forum but I am no closer to an answer. I would like some idea of best practices in configuring my filesystems.


    I have created a RAID6 array on the data drives (9.1 TB).
    I think I want to do the following:
    I want to create a volume (filesystem?) for various Time Machine backups - around 800 GB.
    I want to then create separate filesystems for Photos, Music, Videos, Users, etc.


    When I go to create a filesystem, there is no option to set a size limit, so will it use up the whole array?


    Any advice on how to set this up correctly the first time would be appreciated.


    Thanks,
    Gwarph