Posts by kcallis

    Could you elaborate a bit on how you implemented that? Did you need any additional packages, and what exactly did you change?


    There doesn't seem to be any documentation available, and starting it results in an error about apcaccess or something.


    EDIT: found it. It just required some extra software (apt install apcupsd), and then it works; alternatively, remove the UPS interface file.

    I tried to start up pymonitor and received the following error:


    Code
    /root/src/pymonitor/./pymonitor.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
      import os, re, imp

    So what is the correct way to get this up and running?
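
    From what I can tell, that line is only a DeprecationWarning rather than a hard error, so the script may still run. A minimal sketch for silencing it in the meantime (assuming nothing else in pymonitor.py is broken):


    Code
    # run pymonitor with DeprecationWarnings suppressed; the imp module still works for now
    python3 -W ignore::DeprecationWarning ./pymonitor.py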

    I have downloaded the PVE kernel headers, but how do I get the full kernel source? I am trying to get a couple of patches working, but it has been eons since I last tried to compile a kernel, so I am back to square one again.
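
    My working assumption is that the Proxmox kernel packaging lives in their public git, so something along these lines might be a starting point (the repository URL and layout are my guess, not verified):


    Code
    # fetch the PVE kernel packaging repo (URL assumed; pick the branch matching your PVE release)
    git clone https://git.proxmox.com/git/pve-kernel.git
    cd pve-kernel
    git submodule update --init --recursive   # pulls in the actual kernel tree, if I understand the layout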

    I am constantly running into issues with the web interface. I installed using a USB thumb drive as the boot device. I have RAID-Z1 set up and it seems to be working. I can connect to the OMV server via SSH as well as the web interface. Although my SSH connection stays connected, every time I use the web interface, about 2 minutes after login I get the error message "Software Failure. Press Left button to continue". When I click the button, I am back at the dashboard, but about 30 seconds later I am back to the same error.


    I have updated to the latest version and still no joy! Is this just an issue on my end, or has anyone else run into this problem?
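
    In case it helps anyone point me in the right direction, this is roughly how I am trying to capture the error when it happens (the unit name is my guess at the OMV backend service):


    Code
    # watch the backend and syslog while reproducing the "Software Failure" popup
    journalctl -fu openmediavault-engined.service
    tail -f /var/log/syslog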

    I replaced the DOM with an SSD, which fits easily on top of the drive cage in a closed enclosure. My N5550 has 4GB of memory and runs OMV 6 without issue. Load is minimal; the CPU can handle it just fine. I also have the HDD LEDs and LCD display functional, which is really nice.

    What did you have to do to get the LEDs and display working? Although I am pretty sure my display is shot; the backlight does not seem to work at all!

    Ugh!!! So I was all ready to put my drives in the caddy. I pulled screws from another enclosure that I was using. Of course, the screws were longer than I needed, so when you install the drives into the caddy, the head of the screw presses down on the drive below. This won't allow me to install the last drive. Does anyone happen to know the correct size for the screws? I am thinking M2, but I could be wrong.


    I still have not figured out how to make use of the mSATA card. I have found many web pages that note the existence of the mSATA port, but I have not found any information about actually using the mSATA drive. I ordered a 64GB SATA-DOM device, but it won't be here for another couple of days. I thought that I could use the mSATA as the boot drive, configure OMV, and then move the configuration onto the SATA-DOM when it arrives.
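
    My rough plan for the move is a raw clone once the SATA-DOM arrives; a minimal sketch, assuming the mSATA shows up as /dev/sda, the DOM as /dev/sdb, and the installed system fits on the 64GB DOM (verify with lsblk first):


    Code
    lsblk                                                          # confirm which device is which
    dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync    # raw clone, mSATA -> SATA-DOM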


    I tried to use the original firmware, but that did not go well. I only installed one drive, and a couple of minutes after I started the NAS there was an alarm. I could see that it was complaining about the RAID (understandably so, since I only had one drive in the bay), but I could not get access to the web interface, and the device would not get a DHCP address. I know that the interfaces work, because I booted a rescue USB and the interfaces responded properly.

    I am having an issue installing on an mSATA drive. I decided to use a 512GB mSATA drive as the root file system. I changed the boot order in the BIOS, but noticed that the BIOS did not see the mSATA drive.


    I booted off the USB stick and did not see the drive at the partitioning step. Is there a BIOS setting to make the drive visible, or is there an issue with the drive?
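
    Something like this from a shell on the USB live environment (or any live Linux) should at least show whether the kernel sees the drive at all:


    Code
    lsblk                              # list every block device the kernel detected
    dmesg | grep -i -e ahci -e sata    # check whether the controller/link comes up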

    How are people setting up RAID? Since I am moving away from the factory OS, I started thinking about my RAID options.


    I could just do mdraid 5 and call it a day. I could use mergerfs and SnapRAID with 4 data drives/1 parity. Or I could try RAID-Z1 (but I think the limited RAM would be a deterrent!).
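
    For the plain mdraid 5 option, this is roughly what I have in mind; a sketch only, with the device names being assumptions to check against lsblk:


    Code
    # create a 5-disk RAID5 array across the internal bays and format it (device names assumed)
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    mkfs.ext4 /dev/md0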


    Since I have a 4-drive enclosure that I am connecting via eSATA, I am thinking of at least doing SnapRAID on those drives, but I am still kicking around ideas for the primary drives.


    Any suggestions?

    I had forgotten about the N5550, but I am glad that it is back on my radar. I tried my hand at a virtualized OMV under Proxmox, but it was too much load.


    So I have an N5550 coming next week, and I do have a couple of questions. I have been looking at a DOM replacement and am wondering what size DOM I should purchase. I was thinking 32GB, but I have seen decent prices on 64GB and even 128GB units.


    I am just looking for storage, so no Docker, ZFS, etc. How much load does that put on the system?

    SnapRAID and mergerfs are different things and do not depend on each other. So if you aren't using mergerfs, you don't need to pay attention to suggestions about its use.


    Some programs that run in Docker containers have container-side /config directories that will cause a lot of grief for SnapRAID unless they are excluded. One example is Plex's /config. Another is Smokeping. And there are many more. Rather than discover and exclude these one by one, I found it easier to put them all in one folder, confined to one drive, and exclude that folder. This leaves them all unprotected by SnapRAID, but I can live with that.

    So for instance, my appdata currently resides in /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata. I need to exclude that folder in /etc/snapraid.conf and also reference the full path directly when I am configuring a container (i.e. no symlinks).
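
    For anyone following along, the relevant part of my /etc/snapraid.conf ends up being just an exclude rule like this (the path is relative to the root of each data disk, if I read the SnapRAID docs right):


    Code
    # keep SnapRAID away from the live container config data
    exclude /appdata/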

    I have tried to use a symlink, but with mixed results. Sometimes I can (for instance) use /srv/appdata/foo (which is symlinked to /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/) --> /config as my host path and it works just fine. On the other hand, for some containers I have to use the full path /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/bar --> /config or even /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata/bar --> /config. Each container acts a little differently!


    Interestingly enough, I have never created a mergerfs folder. When looking at the tutorials on using SnapRAID, I never saw an example that used mergerfs. I did read about excluding directories in the /etc/snapraid.conf file (which I didn't know about) to deal with the appdata folder.

    So I have removed all of my shared folders because I wanted to simplify the structure. I am still perplexed about which path to use. For instance (as in the posting above), I have a directory in the pool called appdata (/srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata); appdata is actually located at /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata.


    When I am spinning up a container, I try to use the following:


    Code
    /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata -> /config

    I start the container, but even though it says that it is running, I am not able to access it. If I change the container to the following:

    Code
    /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata -> /config

    and re-deploy the container, I am able to access it and life is wonderful.


    I have checked the permissions and everything looks fine, so I am wondering why I can't use the shorter path?
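
    To make it concrete, this is the shape of what works versus what doesn't for me (a sketch; the container and image names are placeholders, not my real ones):


    Code
    # works: bind the real on-disk path
    docker run -d --name someapp \
      -v /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata/someapp:/config \
      someimage:latest

    # flaky for me: binding the pooled path instead
    #   -v /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/someapp:/config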

    Normally ZFS is my go-to, but considering that I am using OMV strictly for media purposes, it is not important to have ZFS in play. On the other hand, SnapRAID is another issue. I know that SnapRAID support will arrive somewhere down the line, but right now it is still in the pipeline. So that is a major reason to stay with OMV 5 for the time being.

    How do I get rid of shared folders under SnapRAID? I removed all of the directories from the CLI, but when I look at Shared Folders, it still shows the folders and I am not able to remove them. I thought that shutting down all of the containers would release the folders (or at least the pointers, since the directories are gone), but no dice. I have turned off all of the file-sharing services, like CIFS, NFS, etc., and I still can't delete the shared folders.
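
    I have also been poking at the config database from the shell, though I am not sure this is the sanctioned way (the command and data model ID are from memory, so treat this as an assumption to double-check):


    Code
    # list shared folder entries still recorded in the OMV config database
    omv-confdbadm read conf.system.sharedfolder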

    I believe that I am ready to re-install once again, now that I have a better understanding of things. I am running under Proxmox, and OMV 5 has worked pretty well. I had initially upgraded to OMV 6 (and the performance was not bad at all), but downgraded because I felt that I needed the Proxmox kernel (I wanted to make use of ZFS). After downgrading, I realized that because of the drive enclosure I was using (a USB3/eSATA enclosure), I was not going to be able to use ZFS anyway. Nevertheless, using SnapRAID I was able to make use of my enclosure, and life was good once Docker was set up.


    So I decided that I want to blow away my current layout and start anew. Since there haven't been any real issues with using OMV 6 (or so I have read), are there any pitfalls if I opt to upgrade to OMV 6? I know that I will not be able to use the Proxmox kernel, but considering that I am not making use of ZFS, am I losing any performance with the vanilla kernel? I figure I could start setting up OMV 6 now so that when it moves from alpha/beta to mainstream I am ready for the rollout.


    Any thoughts on that, or should I stay with OMV 5 and wait?

    It has been a couple of weeks, but I have come upon another issue. Or maybe it is not an issue and I am just somewhat confused. I use Portainer to handle all of my Docker containers (most of the time... sometimes I need to just start one up on the command line).


    Code
    root@nas-01:/srv# ls -l
    total 40
    drwxr-xr-x  4 root root    4096 Oct  2 23:37 1abd74ac-84b5-4f06-a458-d5d87ecd6e1e
    drwxr-xr-x  5 root root    4096 Oct  3 08:57 dev-disk-by-label-media
    drwxr-xr-x 10 root root    4096 Oct  6 12:55 dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19
    drwxr-xr-x  4 root root    4096 Oct  2 23:37 dev-disk-by-uuid-2e164b03-9001-464b-bdb7-fef2a0b05ff1
    drwxr-xr-x  3 root root    4096 Sep 10 00:50 dev-disk-by-uuid-a3f78985-a6ea-43b3-8753-895c0e249b15
    drwxr-xr-x  8 root root    4096 Oct  2 23:37 dev-disk-by-uuid-cbf8c644-5871-4932-ab87-382b311cb786
    drwxr-xr-x  5 root root    4096 Oct  2 23:37 dev-disk-by-uuid-d02056de-d1d8-4046-bb52-2884b2847bfb

    When I try to, for instance, bind (host) /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Configs to (container) /config, I tend to have issues. On the other hand, if I bind /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/Configs to /config, life is groovy. Am I supposed to use the latter method, or did I make a mistake in my SnapRAID configuration? I would think that I should be able to bind from /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e and not be concerned with where the directory is located on the drives. I find myself constantly having to SSH into the host to get the correct path to Configs (i.e. /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/Configs) as opposed to /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Configs.


    Of course, this could just be my misinterpretation and I am not understanding correctly. Any pointers would be greatly appreciated!
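
    One thing I still plan to check is whether that /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e path is actually a mounted pool or just a directory of symlinks; something along these lines should tell:


    Code
    findmnt /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e           # is anything actually mounted here?
    ls -la /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Configs    # real directory, or symlinks into the uuid disks?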

    Thanks for the pointers to the Extras files. I am wondering if my attempt to make use of the pool directory might be causing the issues.


    In attempting to create Docker storage, I created a directory in pool-01. I am thinking that maybe I should just create a directory on one of the UUID devices instead, because once synced, the Docker storage will end up in pool-01 anyway. Just a thought...