Posts by bbddpp

    I'm an idiot. That's what happens when you spend your day totally rebuilding your server. You get dumb after 8 hours.


    I was forgetting the "export" in the path. That fixed Kodi right up for me. Kodi still refuses to browse the server at the root level when I add the source in the GUI, but adding the source manually in sources.xml works perfectly.
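    For anyone else who hits this, the manual entry in sources.xml looks roughly like this (the IP and share name here are made up; note the /export prefix I was forgetting):

    <sources>
        <video>
            <source>
                <name>Media</name>
                <path pathversion="1">nfs://192.168.1.50/export/Media/</path>
                <allowsharing>true</allowsharing>
            </source>
        </video>
    </sources>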


    I'm not sure if NFS is still the best solution for streaming at home these days, but my variety of Kodi boxes (OpenELEC, Nvidia Shield, PCs, etc.) all seem to agree on NFS, so it's just easier.

    So I've got NFS set up and everything as perfect as can be under OMV 3.0, and my darn Apple OS X Mac mini just refuses to browse the NFS shares.


    I've tried all the uid and gid stuff on the shares. My last 2 tries were:


    subtree_check,all_squash,insecure,anongid=100,anonuid=0
    subtree_check,all_squash,insecure
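
    If it helps, on the OMV side the first of those ends up in /etc/exports as something like this (share name and subnet are just examples):

    /export/Media 192.168.1.0/24(rw,subtree_check,all_squash,insecure,anongid=100,anonuid=0)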


    No luck with either. Kodi refuses to browse the NFS server at all, and won't even use the shares I added manually to sources.xml.


    When I try and just browse to nfs://192.168.X.XXX on the Mac, it says "You do not have permission to access this server".


    So, I'm stumped. I know NFS doesn't have users and passwords, but it's obvious there needs to be some sort of synchronized user account or LAN setting somewhere on the Mac and/or in OMV to get this working.
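
    One thing I still want to try is mounting from Terminal instead of Finder, since I've read OS X picks a non-reserved source port unless you force it (IP, share, and mount point made up):

    mkdir ~/omv-test
    sudo mount -t nfs -o resvport,rw 192.168.1.50:/export/Media ~/omv-test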


    I read somewhere that I should just use SMB, which works great in Kodi on my Mac (where I do all my Kodi library work)... but I always seem to recall that NFS had a much better reputation for speed when it comes to streaming?


    Every other machine in my house, from Kodi on Android to Kodi on Windows 10, can see OMV's NFS shares no problem. It's just the OS X machine that doesn't connect to them, and that's the most important one! Has Apple just borked up NFS?


    Anyone else solved the problem of getting an OS X machine to talk to OMV NFS shares?

    Check this out from the MySQL error.log:



    161203 15:53:50 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    161203 15:53:50 [Note] Server socket created on IP: '127.0.0.1'.
    161203 15:53:50 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
    161203 15:53:50 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
    161203 15:56:30 mysqld_safe Starting mysqld daemon with databases from /media/3ed4d2b9-03c0-4b70-b850-82df1e1757f7/SQL
    161203 15:56:30 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
    161203 15:56:30 [Note] /usr/sbin/mysqld (mysqld 5.5.53-0+deb8u1) starting as process 14916 ...
    161203 15:56:30 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
    161203 15:56:30 [Note] Plugin 'FEDERATED' is disabled.
    /usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
    161203 15:56:30 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
    161203 15:56:30 InnoDB: The InnoDB memory heap is disabled
    161203 15:56:30 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    161203 15:56:30 InnoDB: Compressed tables use zlib 1.2.8
    161203 15:56:30 InnoDB: Using Linux native AIO
    161203 15:56:30 InnoDB: Initializing buffer pool, size = 128.0M
    161203 15:56:30 InnoDB: Completed initialization of buffer pool
    161203 15:56:30 InnoDB: highest supported file format is Barracuda.
    InnoDB: Log scan progressed past the checkpoint lsn 48941
    161203 15:56:30 InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
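
    From the "Table 'mysql.host' doesn't exist" line, my guess is the new datadir was never populated with the system tables. If I'm reading it right, something like this should initialize it for the 5.5 mysqld in the log (datadir path as above):

    sudo service mysql stop
    sudo mysql_install_db --user=mysql --datadir=/media/3ed4d2b9-03c0-4b70-b850-82df1e1757f7/SQL
    sudo service mysql start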

    It's only being used for a Kodi database, which gets its share of use, but not all day and night type of stuff.


    I left that field blank, and the plugin is turned on, so I figured I'd just try restarting the service via SSH.


    Here's the output of "service mysql start":


    Job for mysql.service failed. See 'systemctl status mysql.service' and 'journalctl -xn' for details.


    Output of: "systemctl status mysql.service"


    Code
    * mysql.service - LSB: Start and stop the mysql database server daemon
       Loaded: loaded (/etc/init.d/mysql)
       Active: failed (Result: exit-code) since Sat 2016-12-03 15:54:18 EST; 11s ago
      Process: 13439 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)
    Dec 03 15:54:18 OMV mysql[13439]: Starting MySQL database server: mysqld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . failed!
    Dec 03 15:54:18 OMV systemd[1]: mysql.service: control process exited, code=exited status=1
    Dec 03 15:54:18 OMV systemd[1]: Failed to start LSB: Start and stop the mysql database server daemon.
    Dec 03 15:54:18 OMV systemd[1]: Unit mysql.service entered failed state.

    Next thoughts? Total uninstall and reinstall? I'm pretty sure the first time I turned it on, even before I moved the path, it threw an error, so something else might be up here.
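
    If it comes to a full wipe, I assume it would look something like this (package names guessed for Debian jessie; the rm destroys the old databases, so only after a backup):

    sudo apt-get purge mysql-server mysql-server-5.5
    sudo rm -rf /var/lib/mysql    # destroys existing databases - back up first
    sudo apt-get install mysql-server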

    I always thought I should store my databases on something other than the main OMV volume, especially since I am using an SSD. Should I just leave them where they are by default on the SSD OS drive? In that case would I still need to specify an alternate location or just leave that field blank? Because I think when I left it blank, I got an error as well (a different error, but an error all the same).

    Thanks for the push into 3.0, ryecoaaron.


    That fixed all my problems with the segmentation faults.


    Just troubleshooting a MySQL plugin issue now, but will start a thread for that in the proper place.


    3.0 looks nice: all the stuff that made 2.0 great, made even more efficient. And no segmentation faults is awesome too. That was driving me nuts at 2 AM last night! :)

    Checked the forum for similar threads but found no definitive solution, hoping I can get some help. This is on a fresh install just done today of OMV 3.0 beta.


    The MySQL plugin can be turned on, but as soon as the Data Directory is set and saved, errors happen.


    datadir = /media/3ed4d2b9-03c0-4b70-b850-82df1e1757f7/SQL


    I figured I'd pull out all the stops and make mysql the user and group owner of the above folder with full permissions, and created an internal share. Didn't change anything.
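
    For the record, the ownership change was roughly this, over SSH:

    sudo chown -R mysql:mysql /media/3ed4d2b9-03c0-4b70-b850-82df1e1757f7/SQL
    sudo chmod -R 770 /media/3ed4d2b9-03c0-4b70-b850-82df1e1757f7/SQL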


    GUI Interface Error: (screenshot attachment missing)


    Output of: "systemctl status mysql.service" (output missing)


    What obvious thing am I missing here?

    Is it possible that it's not the drive's fault even though I see the error during a write operation?


    Could it be CPU or Memory causing this to occur? Or an incorrect BIOS setting or unsupported hardware?


    I'm running a memtest next.
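
    If rebooting into memtest86+ turns out to be a pain, I may try the userland memtester first (size and pass count arbitrary):

    sudo apt-get install memtester
    sudo memtester 1024M 3    # test 1 GB of RAM, 3 passes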

    I am seeing segmentation faults on a fresh install of OMV on a new SSD. SMART data is fine, green light and all tests good on the SSD. The prior SSD had gone red, so I thought a new one would solve it.


    So, things I have tried:


    - Installed the flashmemory plugin immediately (first boot)
    - Switched to the backports kernel
    - Changed the SSD
    - Uninstalled and reinstalled plugins
    - Did a clean, fresh install of OMV
    - Changed the SATA data cable to the SSD


    I have now tried 3 different SSD drives and all 3 are throwing segmentation faults. What else could cause this, and what should I be checking? Memory? Something else? Is it possible I am running some sort of hardware that is incompatible with OMV and will always throw segmentation faults?


    Log snippets:




    I'm stumped.

    Log Part 3:


    Log Part 2, Continued:



    Sorry for the cross-post, I realized I posted my initial info in the wrong forum.


    I will admit this is my first full-size server and I may be in a bit over my head. I am using a Dell C2100, which has 12 bays and all kinds of hardware inside (backplane, cabling, etc.) to make the 12 drives work. I bought this used, so my fear is that I have some bad hardware in here (the drive itself seems to pass all tests).


    3 times now, while performing a copy operation to this drive, the system has frozen and the drive has errored out and gone offline. What worries me is that the log, while it looks mostly Greek to me, may be pointing at failing hardware (backplane, etc.) and could tell me which component is the culprit, but I just can't make heads or tails of it.


    I have the full log at the exact spot it happens, from the moment things go "bad", and am hoping someone might see something in the messages here that spells out what I should be looking for.


    This admitted server newb would sincerely appreciate any insight at all! If this points to a hardware failure and there are specific components I should be checking, that head start would be a great help. This used server has a 30-day warranty, so if there is a faulty component somewhere in the drive array, perhaps I can get it replaced.


    LOG in 3 PARTS, Part 1:


    Additional issue on my other, older OMV box (currently running 2 boxes, migrating old to new, lots of joy).


    One of my standalone, non-pooled, non-RAID 5TB ext4 drives went into read-only mode and I can't seem to get it remounted as read-write. Am I missing the command or something? I just want to get this drive mounted so I can work on some files!
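
    What I've tried so far, in case I'm just fumbling the command (mount point is a placeholder):

    dmesg | tail -50                                  # look for the ext4 error that triggered read-only
    sudo mount -o remount,rw /media/<uuid-of-drive>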


    OMV ran an fsck on the drive when I rebooted, yet it still mounted read-only again after the fsck and a full reboot.


    Brand newish drive, 886 hours life so far, short self test reported no SMART errors.


    Help? The hard drive / server fairy hates me today.

    Error log continued (too long for first post):


    Twice in less than a day now, while performing an rsync on my freshly formatted OMV box (OMV on an SSD), I had a drive crash and go offline. The drive is a brand new 5TB Seagate (I know, not the greatest). The backplane supports large drives.


    Here is the error log, if anyone can make heads or tails of anything here to point me to the culprit:


    The buffer I/O error message then repeated until I noticed it, unmounted the drive, ran an fsck to repair the journal, and remounted the drive.
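
    For reference, the recovery each time was basically this (device and mount point are placeholders):

    sudo umount /media/<uuid-of-drive>
    sudo fsck -y /dev/sdX1      # this is what repaired the journal
    sudo mount /media/<uuid-of-drive>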


    It's as if OMV or my hardware suddenly just rejected the drive.


    I'm new to the server world but really just wanted a big JBOD box with a ton of standalone ext4 drives running OMV. Each drive works on its own. There are 3 other hard drives in the system plus the SSD. Only the drive I was copying to (for over 12 hours each time) had this issue.


    I'm not using RAID, but my system is a 12-bay Dell C2100 box with a backplane and all that. It was bought used, so I am worried about hardware failures and hoping someone has some insight.

    My OMV box is going to run Sonarr, NZBGet, Transmission, and CouchPotato 24/7, and serve video throughout the house. Never more than 2 streams at once, and usually just one.


    I have a good idea of the board I want to use, to maximize SATA ports, but should I be looking at an i5 processor instead of an i3 for running these apps?


    Also, is 16 GB of memory enough? Too much? Just don't want any bottlenecks.

    I'm building a new OMV box while my current one is running.


    Once the new box is running, since my current OMV installation has each drive running independently of the others, can I just gradually pull the drives from the old system and put them in the new system?


    It sounds easy, but for some reason I think this is difficult? I don't believe OMV can just import a full existing file system like that; it can only import newly formatted drives?
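
    My understanding is that the data rides along on each ext4 drive, so worst case I could sanity-check a moved drive by hand in the new box before letting OMV at it (device name is a placeholder):

    sudo blkid /dev/sdb1                     # confirm it shows up with TYPE="ext4"
    sudo mkdir -p /mnt/check
    sudo mount /dev/sdb1 /mnt/check && ls /mnt/check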


    Any hints would be most appreciated. If this doesn't work, I will just need to clone my current OMV installation to a drive to be built for the new system and move the drives into it that way. Just thought this might be a good opportunity to start fresh.