Posts by Flachzange

    I had the exact same behaviour using an openmediavault_3.0.86-amd64.iso standalone installation from USB to USB. I also had another issue related to service management. I don't know what went wrong there; it's hard to imagine that an addon is responsible for that.


    As re-installing is not an actual solution, anyone with the same problem could try one of two things:


    1) Edit /usr/sbin/policy-rc.d and change value "101" to "0".
    OR
    2) Delete /usr/sbin/policy-rc.d
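
    The two options above can be sketched as shell commands. This assumes policy-rc.d is a plain shell script ending in "exit 101" (the deny code), as the OMV installer writes it; adjust if yours differs:

```shell
# Location written by the OMV installer (assumption -- verify on your box)
POLICY=/usr/sbin/policy-rc.d

if [ -f "$POLICY" ]; then
    # Option 1: turn "deny all service actions" (exit code 101) into "allow" (0)
    sed -i 's/^exit 101$/exit 0/' "$POLICY"

    # Option 2 (instead of option 1): delete the file entirely;
    # invoke-rc.d allows everything when no policy-rc.d exists
    # rm -f "$POLICY"
fi
```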

    I have a weird issue and already spent some time googling but without any success.


    I did a fresh install with 3.0.86 after my USB pen drive suddenly stopped working. I had been running 3.0.x since Feb 2016. Although the previous system evolved over time through lots of beta versions, it did its job.


    The fresh install runs even better, except for one issue: the system will only resume from standby if a screen/display is connected via HDMI. As this functionality is crucial, I must find a solution. I don't have a screen where the NAS is placed.



    This is what happens:


    Alternative 1 (No screen connected)
    1) System is running / no screen is connected via HDMI
    2) Issue pm-suspend
    3) System is in standby (S3)
    4) Wake up the system using WOL or the power button
    5) In 90% of the tests, the system hangs during wakeup (pings are replied to, but nothing else works; no reaction to ssh either). In 10% of the tests the system resumes as expected
    6) Once HDMI with a display is plugged in, the system resumes immediately as expected


    Alternative 2 (Screen connected)
    1) System is running / screen is connected via HDMI
    2) Issue pm-suspend
    3) System is in standby (S3)
    4) Wake up the system using WOL or the power button
    5) System resumes immediately in 100% of the tests
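
    For reproducing this, a helper along these lines can be run from a second machine. NAS_MAC and NAS_IP are placeholders for your own values, and wakeonlan comes from the Debian package of the same name; this is a sketch of my test procedure, not an exact script:

```shell
# Probe whether the NAS really resumed, or only answers ICMP (hung resume).
NAS_MAC="${NAS_MAC:-aa:bb:cc:dd:ee:ff}"   # placeholder MAC
NAS_IP="${NAS_IP:-192.168.1.10}"          # placeholder IP

wake_and_probe() {
    command -v wakeonlan >/dev/null || { echo "wakeonlan not installed"; return; }
    wakeonlan "$NAS_MAC"
    sleep 30
    # ICMP keeps working even when the resume hangs ...
    ping -c 3 -W 2 "$NAS_IP" >/dev/null && echo "ping: OK"
    # ... but ssh only answers when the box has really come back
    if ssh -o ConnectTimeout=5 root@"$NAS_IP" true 2>/dev/null; then
        echo "ssh: OK (resume completed)"
    else
        echo "ssh: no reply (resume stuck)"
    fi
}
```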



    So I had a look into the logs, which confirm my observations:


    https://pastebin.com/ZinC5rrP



    You can clearly see that some resume operations work fine, e.g. network and disks, until a certain point in time (lines 1-44). This is what takes place before the HDMI cable and screen are connected. The system then hangs until you plug in the cable. Once the cable is plugged in, it becomes clearer what caused the issue: the call traces refer to the Intel graphics driver.


    However, my experience ends at this point, as I don't know what exactly in the driver causes the problem, or whether it is another problem that the driver is merely revealing. I already updated the Intel driver from the backports repository, but that did not change the situation.
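
    For anyone wanting to dig further, these are the kind of read-only checks I used to tie the traces to the Intel KMS driver (i915); output will obviously differ per system, and nothing here changes anything:

```shell
# Is the Intel KMS driver loaded at all?
lsmod | grep -i i915 || true
# Resume-time call traces mentioning the graphics stack
dmesg | grep -iE 'i915|drm' | tail -n 20 || true
# Driver version (useful after installing the backports driver)
modinfo -F version i915 2>/dev/null || true
```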


    Any help is much appreciated.


    Thanks.

    Same issue here. A large Btrfs filesystem on mdadm RAID with LVM ended up in a timeout / communication error in the web GUI. What helped in my case was installing the LVM plugin. That seemed to be required anyway, as I could not create any Samba shares based on a manual mountpoint.

    @Max76


    Did you solve the problem long term? I have been encountering the same problem for a couple of months now, and jdownloader is practically unusable on my omv machine. Besides reinstalling the plugin I tried a few things, but eventually jdownloader itself crashes and does not come up again. As a result, only the backup jar file exists.


    I am also under the impression that the self-update does not work if the backup jar file is copied afterwards.


    OMV 3.0.47 with jdownloader plugin 3.2.1 on Debian with 4.7 backport kernel

    Solution?
    My last idea (apart from using another board) would be giving mdadm the partitions /dev/sdX1 instead of the whole drives /dev/sdX during array creation.
    Does this write the superblock to the partition itself instead of to the MBR (which seems to get wiped) of the disk? And even more important: will this destroy my existing data on the drives?


    I can confirm the above solution.


    I had an mdadm RAID5 over four disks, created on /dev/sd[abcd] instead of /dev/sd[abcd]1. This worked fine on my Jetway NF9G. However, moving the RAID to a brand-new ASRock N3700 resulted in deleted superblock information. Luckily, I could reassemble it knowing the exact order of the drives, so no data was lost in the end. I then degraded the RAID and created a new one using partitions, which is recommended anyway. Several days later, the RAID5 is now working properly on the ASRock N3700.
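
    Roughly, moving one member at a time from whole-disk to partition looked like the sketch below. The device names and /dev/md0 are examples, the array stays degraded during each step, and the partition layout is an assumption on my part, so triple-check against your own setup before running anything like this:

```shell
# Migrate one RAID member from whole-disk (/dev/sdX) to partition (/dev/sdX1).
# Destructive to the disk passed in -- run once per disk, waiting for the
# resync to finish before touching the next one.
migrate_member() {
    disk="$1"                                   # e.g. /dev/sdd
    mdadm /dev/md0 --fail "$disk" --remove "$disk"
    parted -s "$disk" mklabel gpt mkpart primary 0% 100%
    mdadm /dev/md0 --add "${disk}1"
    cat /proc/mdstat                            # watch the rebuild progress
}
```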