Posts by Lucideye

    All that illegal (hundreds of thousands of dollars in fines and jail time) data was lost ;(

    Fuck you. I paid for every DVD and Blu-ray in my collection, and then spent hours ripping them to files for backup. I still have most of the original discs; in some cases, those are my only backups... which is why it's a big deal when a drive suddenly crashes for no apparent reason. I still have all the data and files, it just takes days or even weeks to recover it all from optical discs. So go fuck yourself and your false accusations of illegal activity.

    After wasting my entire evening, and well into early this morning, I still couldn't figure out why OMV could not see the filesystems on my USB drives. I can plug those drives into my desktop and, luckily, the filesystems are still there, with all the files and directories intact. But OMV is not even recognizing them for some reason.


    I finally decided to just get rid of OMV altogether, run the latest release of Bullseye, and simply install an NFS server and set up the shares manually. I'm only sharing 2 drives, each with only one shared folder, so only 2 shared folders total... and there are only 4 clients on this network segment that access those shares, with no need for any access outside this segment. And since I completely got rid of Samba on all of my machines and switched to NFS, I really have no more need for OMV at all. NFS is easy to set up so that only IPs from this segment have access, and I've also mapped the drives on my clients so that they simply appear already mounted at boot.
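    Roughly, the replacement setup looks like the sketch below (the export paths, subnet, and server IP are placeholders, not my actual values):

    ```
    # On the server (Debian Bullseye): install the NFS server
    sudo apt install nfs-kernel-server

    # Export each shared folder to this network segment only
    # (hypothetical paths and subnet)
    echo '/srv/media1 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
    echo '/srv/media2 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

    # Apply the exports and make sure the service starts at boot
    sudo exportfs -ra
    sudo systemctl enable --now nfs-server

    # On each client: add an fstab entry so the share mounts at boot
    # (hypothetical server IP and mount point)
    echo '192.168.1.10:/srv/media1 /mnt/media1 nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
    sudo mkdir -p /mnt/media1 && sudo mount /mnt/media1
    ```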
    It's really disappointing; this used to be a reliable piece of software. But after this and other recent serious crashes and data losses I've had with OMV, it's clear I can't trust my data to this software anymore. I'm also really sick of the gaslighting and "pleasant attitude" of people like ryecoaaron, and sick of devs who are clearly not doing their due diligence in testing their software before releasing it for public use, and who then hide behind the EULA when their software causes serious damage to people's data.

    Your files aren't gone. You could always enable SMB as a backup.

    So.... where is all my data?

    I just went through a whole fresh reinstall on a new card, and now I am finally back into the web GUI and... NO FILES?!?!?!?
    I can see my drives....


    But the filesystems are GONE?!?!?!?!?! WTF?!?!?!


    There were file-systems on these drives BEFORE THE UPDATE!!!!


    WHERE ARE THEY NOW???????


    THAT'S 2 TERABYTE DRIVES, BOTH FULL OF ALL OF MY MOVIES, MUSIC, TV SHOWS, PICTURES, ETC.!!!!!


    I sure hope someone can tell me how to get all my data back!!!

    An RPi3 is slow.

    Not this slow. I've been using OMV on older Raspberry Pis for over 5 years with no major problems... until recently.
    It has now been over 10 minutes since I tried unselecting and reselecting the NFS versions and clicking the check on the Pending changes bar... it just finally stopped and is now showing a blank "software error" screen, and now I can't access the web GUI at all!!!! I can't even SSH into the backend of that Pi now... total crash!!!


    Your files aren't gone. You could always enable SMB as a backup.

    If I cannot access them from any of my network clients, they are the same as "gone" to me. The purpose of a NAS is to be able to access files on it from any machine on the network, which I cannot do anymore.
    I purposely GOT RID of Samba because it was no longer working properly on my Linux machines, and Samba is completely unnecessary on a Linux-only network. I don't own any Windows machines; I only need NFS. So why would I want two different sharing protocols set up on all of my clients?????? That is seriously one of the worst answers I have heard yet in this forum.

    We do our best for two people volunteering their time. This happens to Windows all the time. I spent a few hours Monday fixing the fallout from a bad Windows update.

    Sorry, but it really seems like "your best" is leaving a lot of people unexpectedly in pretty lousy situations. Maybe do more due diligence in testing your updates before releasing them as "stable", instead of these ever-increasing lazy-developer-syndrome excuses?


    The "fix" in the blog post you sent me is for OMV 6.5.0
    I am running 6.6.0-2, or did you not bother to even read my post thoroughly?


    I just tried that "fix", and now my OMV server will not even boot!!!!

    I just ran an update of installed packages, and now my OMV NAS is completely broken!
    OMV 6.6.0-2 running on a Raspberry Pi 3.
    Prior to this latest update, it had been up and running fine for over 100 days, but now...


    The NFS service will not start... the dashboard is showing this service as "not running"

    (Yes, the service is enabled in the settings and the shares are set up correctly; it was ALL WORKING FINE until I ran the latest update.)



    The yellow "pending configuration changes" banner across the top of the page will not go away when I click the check mark...


    ... It just sits there forever and then after a long time it finally spits out this really long error in a big red box.

    ```
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': OMV:
    ----------
    ID: configure_default_nfs-common
    Function: file.managed
    Name: /etc/default/nfs-common
    Result: True
    Comment: File /etc/default/nfs-common is in the correct state
    Started: 17:14:43.091877
    Duration: 1106.187 ms
    Changes:
    ----------
    ID: configure_default_nfs-kernel-server
    Function: file.managed
    Name: /etc/default/nfs-kernel-server
    Result: True
    Comment: File /etc/default/nfs-kernel-server is in the correct state
    Started: 17:14:44.198874
    Duration: 1036.782 ms
    Changes:
    ----------
    ID: configure_nfsd_exports
    Function: file.managed
    Name: /etc/exports
    Result: True
    Comment: File /etc/exports is in the correct state
    Started: 17:14:45.236641
    Duration: 1137.721 ms
    Changes:
    ----------
    ID: start_rpc_statd_service
    Function: service.running
    Name: rpc-statd
    Result: True
    Comment: The service rpc-statd is already running
    Started: 17:14:53.444500
    Duration: 220.607 ms
    Changes:
    ----------
    ID: start_nfs_server_service
    Function: service.running
    Name: nfs-server
    Result: False
    Comment: A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
    Started: 17:14:53.670358
    Duration: 431.158 ms
    Changes:

    Summary for OMV
    ------------
    Succeeded: 4
    Failed: 1
    ------------
    Total states run: 5
    Total run time: 3.932 s

    [ERROR ] Command '/bin/systemd-run' failed with return code: 1
    [ERROR ] stderr: Running scope as unit: run-rd3d38abdf47b4c139d9af49175b6551a.scope
    A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
    [ERROR ] retcode: 1
    [ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.


    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': OMV:
    [... identical Salt output and [ERROR] lines as above ...] in /usr/share/php/openmediavault/system/process.inc:242

    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(174): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusV6...', '/tmp/bgoutputsp...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(195): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}

    ```

    It really doesn't inspire any confidence in your software when a simple, routine package upgrade can render the file server completely unusable.


    What I have tried to fix this:


    I tried rebooting the OMV Pi... no fix, all problems persist.


    I tried removing duplicate and unused UUID references from /etc/openmediavault/config.xml and from /etc/monit/conf.d/openmediavault-filesystem.conf, and removing any unused folders from /srv, then rebooting... no fix, all problems persist.


    I need to fix this ASAP. How do I get access to my files again?!?!?!?
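    The error itself narrows this down: the failure is in a dependency of nfs-server.service, not in the service proper. A minimal diagnostic sketch, assuming SSH access still works (the unit names listed are the usual Debian suspects, not confirmed from this log):

    ```
    # Show any failed units and the nfs-server dependency chain
    systemctl --failed
    systemctl list-dependencies nfs-server.service

    # Typical dependencies that break on Debian: rpcbind, nfs-mountd,
    # and the proc-fs-nfsd.mount unit
    systemctl status rpcbind nfs-mountd proc-fs-nfsd.mount

    # Full journal context for the failed start
    journalctl -xe -u nfs-server
    ```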

    I know it is an unfortunate situation, but what you say makes no sense at all. Every attempt to read from the drive results in I/O errors, but you know it is not dead because it was working fine before and makes no clicking noises? Bro, every drive was working fine before it died. Clicking noises can be an indicator, the same way bad SMART values are.


    What you can do is use dd with the conv=noerror option to force-clone the whole drive to a new one while ignoring read errors. Do this first, because every further access to the bad drive may cause even more data damage. You can then try to repair the broken filesystem and read data from the clone without any risk to the original.
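    A minimal sketch of such a clone (/dev/sdX is the failing source and /dev/sdY the healthy target; both names are placeholders, and the target must be at least as large as the source):

    ```
    # Clone the failing drive block-for-block, skipping unreadable sectors.
    # conv=noerror keeps dd going on read errors; conv=sync pads short reads
    # with zeros so the copy stays aligned with the original.
    sudo dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync status=progress
    ```

    (GNU ddrescue is generally better suited to this job, since it retries and keeps a map of bad sectors, but dd is the tool named above.)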

    I do not need your insults and attitude. If you can't offer help without insults and attitude, then FUCK OFF.
    I thought I made it clear that I don't have any other drives I can copy a full image to. I have one other available drive, and it is the exact same size as the drive I am having trouble with, so none of the available rescue tools will copy an image; they tell me the target drive is too small. So if you really want to help, maybe you could send me a larger drive instead of your insults and attitude.

    I just upgraded to OMV5

    There are apparently no OMV 6 install instructions or script for the Raspberry Pi?

    If there are, can you please point me to a link?

    Will OMV 6 even run on a 32-bit RPi 2?

    I don't have any spare Pis that are newer right now, and I cannot afford to buy a Pi with the current price gouging going on, nor can I really afford a 4-6 TB hard drive to do a full backup of the remaining drives on the server.

    Unfortunately I do not have a complete backup of everything that was on the failed drive.

    I probably only have about 70% of what was on there backed up, and those backups are spread across several DVD-ROMs.

    I do not currently have a DVD drive, so I really need to get that data off that hard drive somehow.

    I know the drive is not "dead".

    It was working fine up until it wasn't, and I've never had a drive fail so completely that most of the data couldn't be retrieved.

    It still spins up and can be read by testdisk and other diagnostic tools as I mentioned.

    The drive does not make any bad noises or clicking when it is powered on.

    It has simply lost its primary partition record for some reason. I'm sure I could probably use one of the many backup partition records that testdisk says are still valid, but since I cannot seem to copy the drive to another good drive, I am wary of writing anything to the bad drive... hence why I asked if there was anyone with experience recovering data from a drive in this condition.

    So my original question still stands.
    Is there anyone who can please help or at least offer some instruction so I don't do any further damage to the drive?
    Thank you.
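    For what it's worth, the standard route for rebuilding a main GPT from a valid backup is gdisk's recovery menu; a sketch is below (the device name is a placeholder, and the final "w" step writes to the bad drive, so it should only ever be run on a clone, or after cloning):

    ```
    sudo gdisk /dev/sdX
    # At the gdisk prompt:
    #   r   enter the recovery & transformation menu
    #   b   use the backup GPT header to rebuild the main header
    #   c   load the backup partition table, overwriting the damaged main one
    #   p   print the recovered table and sanity-check it before going further
    #   w   write the table to disk -- this is the irreversible step
    ```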

    Hello,


    I have been using OMV v3 on a Raspberry Pi 2 for almost 5 years now with no problems.

    I have 2 USB drive enclosures, each holding two 1 TB laptop drives... so 4 drives total, with 4 TB of total storage space.

    The USB enclosures each have their own dedicated PSUs, and the Pi has its own dedicated PSU as well.

    All of the PSUs are plugged into a UPS.

    As far as I can remember, the drives were formatted GPT with EXT4 partitions, no LVM.

    This server has been up and running 24/7 for about 5 years; it's only been rebooted a few times for updates, and once to change the battery in the UPS.

    It has never suffered an "unscheduled power down", all reboots and power downs were controlled and intentional.

    All it really does is store media files... music and movies that I play from a couple of other Pis running KODI.

    I have the maximum power-saving settings enabled on all of the drives, so none of them spin 24/7; they spin down and sleep after 10 minutes of non-use (roughly what the hdparm sketch after this list does).

    I also have SMART monitoring enabled, and none of the drives have ever shown any warnings, they are listed as being healthy.

    Each drive has only one or two shared folders.
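    (For reference, a 10-minute spindown like this is typically expressed with hdparm as in the sketch below; whether OMV uses hdparm under the hood is an assumption on my part, and some USB bridge chips ignore these commands entirely.)

    ```
    # -S counts in units of 5 seconds for values 1-240, so 120 = 600 s = 10 min
    sudo hdparm -S 120 /dev/sdX

    # Optional: APM level; values of 127 or below allow the drive to spin down
    sudo hdparm -B 127 /dev/sdX
    ```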


    This past week, I noticed I could no longer access 2 of the shared folders, these folders turned out to be on the same physical drive.

    I tried rebooting the server and drives, but the folders on that one drive were still not accessible from my clients; everything else seemed OK, though.

    SMART monitoring did not indicate any problems with the drive.

    I decided to power down the server and pull the drive from the USB enclosure and check it on my Ubuntu desktop using a different USB drive docking station.

    The Ubuntu Disks utility sees the drive, but instead of showing the EXT4 partition and my files, the drive shows up as "Partition 1, 1.0 TB Unknown".


    I do not understand how this could have happened, especially when the SMART scan says the drive is OK.

    Unfortunately, this particular drive contains the backup of all my photography, business files, and my entire music collection, so I desperately need to recover the files if possible.


    I have tried various recommended analysis methods: fsck, gdisk, testdisk, etc.

    Testdisk recognizes the disk as using an EFI GPT partition table.

    However, I really don't want to proceed any further without some help and advice.

    Unfortunately I am also not able to copy the bad drive to another drive for testing purposes.

    Any attempt to copy the drive results in I/O errors from the bad drive.

    Yet when I run sudo smartctl -a /dev/sdc it shows no glaring problems with the drive.
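    For reference, all of the checks below are read-only and safe to repeat while deciding what to do next (/dev/sdc matches my system; adjust as needed):

    ```
    # SMART health verdict and full attribute dump (read-only)
    sudo smartctl -H /dev/sdc
    sudo smartctl -a /dev/sdc

    # List the partition tables without writing anything
    sudo gdisk -l /dev/sdc
    sudo fdisk -l /dev/sdc

    # See whether the kernel logged I/O errors while probing the disk
    sudo dmesg | grep -i -E 'sdc|error' | tail -n 30
    ```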

    I desperately need to recover this drive intact if possible.
    Any advice would be most appreciated, preferably from someone who has experience and really knows what they are doing.

    I can't believe the drive is damaged beyond recovery; it has just somehow lost its partition information.

    Please help :(