My NAS.

  • Hello, I've been using OMV for some time now. My goal was a working storage system that I could access at home from any device. Later I decided to add Plex to my system, and that's where my problems started. After adding a few more hard drives and creating a multi-drive system, I installed Plex. It worked for a few days, until the first restart. After spending some time looking into what went wrong, I left it as it was, because I didn't have much time back then. Now I have tried to rebuild the system; I thought my problem was the motherboard. But after building a new system, almost the same problems came back. I'm not a Linux guy, and it is hard for me to track down what might be causing this, so this time I thought I'd try explaining my problem on a forum.

    I've been googling for a solution to my problem; here is what I found and the issues with it. After the build, I installed Portainer and then Plex. The system worked fine until I turned it off and put it on the shelf where it belongs. One of my file systems went missing. While looking for the cause, I found many error messages about the "monit" service. After some digging, I found commands that got rid of the errors. But now an HDD is missing from the system and two file systems are gone. I don't want to mess with the system without knowing what I'm doing; maybe you can help me get started somewhere.

    Error: 2025-01-23T12:14:30+0200 OMV monit[1007]: 'mountpoint_srv_dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1' status failed (32) -- /srv/dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1 is not a mountpoint

    Commands that helped to get rid of those errors:
    rm /etc/monit/conf.d/openmediavault-filesystem.conf
    service monit restart
    Thread: monit errors

    • Official Post

    There could be many causes; it is difficult to say without more details. But the first thing that comes to mind after reading your post is that a cable simply shifted during transport, which may have caused a poor connection on one or more hard drives.

    On the other hand, I suggest you not use Portainer; the openmediavault-compose plugin does the same job more efficiently and provides additional functionality. You can read how to configure and use it here: https://wiki.omv-extras.org/doku.php?id=omv7:docker_in_omv

  • Maybe there is a way for me to provide more details? All cables are seated properly. The NAS was working fine in the new place where it is meant to sit permanently, but after a reboot or an update it started acting strange.

    • Official Post

    Commands that helped to get rid of those errors:
    rm /etc/monit/conf.d/openmediavault-filesystem.conf

    The problem is still there. You just don't get any error messages in the logs anymore.

    The purpose of monit is to monitor the system and give alerts and/or take actions when something is not working as expected.

    In the linked thread a backup file with the ending .bak was deleted as it did not belong there.
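
    By the way, the deleted file does not have to stay gone: openmediavault generates its monit configuration from built-in Salt states, so it can be redeployed (standard OMV tooling; double-check the module name on your version):

    omv-salt deploy run monit   # regenerates the openmediavault monit configuration files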


    What kind of system are you using? How are the drives connected? Do they form a RAID?

  • So now it's a new setup, because I thought my previous motherboard was faulty. I bought a new one with 8 SATA ports, since my old motherboard didn't have enough; to work around that, I had installed a PCI SATA card to expand the number of available ports, but I suspected it might have been causing problems. The new motherboard is an ASUS P8Z68 Deluxe, running with an Intel i7 3770k CPU. For installation, I temporarily mount a GPU of some kind, just for setup purposes.


    The system has three pairs of HDDs configured as RAID. Each pair consists of identical HDDs—same brand, model, and capacity. They are all set up as Mirror RAID (RAID 1) for redundancy. The PSU is 300W, which should be enough for my setup.


    Originally, I wanted a storage system for my home. I was tired of constantly plugging in flash drives and external HDDs to transfer files between devices. I also run a small business from home with my wife, where quick access to data is essential. I handle machines like lasers and printers, while she works on drawings and designs for production. I needed a storage solution that I could manage myself, rather than relying on cloud storage. A home-built NAS system seemed to cover all my needs.


    Later, I discovered Plex, which I thought would be a great addition. During my first setup, everything worked fine, even after I connected a second pair of HDDs for RAID; it kept running smoothly until the first restart. Due to time constraints, I left the system as it was. My main storage RAID was fine, so I sacrificed my home media center to keep my data storage stable.


    As I wanted to get things right, I decided to buy a new motherboard and set up everything again. But after just a few days of use, issues started appearing. I suspect that I may be doing something wrong during the setup stage.


    I’m primarily a Windows user, and my experience with RAID setups is limited. Additionally, I installed the HDDs in stages, which might be causing a problem that I don’t fully understand.


    This is my NAS story. Now, I’m unsure whether I should reinstall everything from scratch or try to troubleshoot the issues.

    • Official Post

    But after just a few days of use, issues started appearing

    What are the problems? Without more details it is difficult to help.

  • Currently, my system has three pairs of hard drives:


    One 1TB pair for my work-related data storage.

    One 4TB pair, which serves as a general storage solution for both my personal data and Plex media center.

    One 500GB pair, which is used for temporary files before sorting and transferring them to the other two storage arrays.

    How I Set Everything Up:

    First, I downloaded the latest OMV installation file and installed it. Then, I connected the HDDs in pairs, one set at a time.


    I inserted two hard drives that would be configured as RAID, initialized them, set up RAID, and formatted them as a file system.

    After that, I powered off the NAS and repeated the process for the next two pairs of HDDs.
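
    (For reference, what the OMV web UI does in those steps is roughly equivalent to the following commands. The device names /dev/sdb and /dev/sdc are placeholders, and this is only an illustration of what the wizard does behind the scenes, not something to run against an existing array.)

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # mirror the two disks (RAID 1)
    sudo mkfs.ext4 /dev/md0                                                     # create a file system on the new array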

    I now realize that this might have been a mistake—I probably should have installed all HDDs at once, set up the system, and ensured that all disks were recognized from the very beginning of a fresh installation.


    After setting up the storage, I installed OMV-Extras, then Docker, then Portainer, and finally Plex. I followed this specific installation sequence based on the information I gathered from a few YouTube tutorials:


    1. YouTube Video 1
    2. YouTube Video 2


    The Issue:

    After the installation, the NAS was working perfectly—both the storage and the Plex server ran smoothly. However, after a few days (without rebooting the NAS), the 1TB storage suddenly became inaccessible. When I restarted the NAS, both hard drives from that array were no longer recognized by the system.


    I checked the log files and found many similar errors reported by Monit:

    Error: 2025-01-23T12:14:30+0200 OMV monit[1007]: 'mountpoint_srv_dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1' status failed (32) -- /srv/dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1 is not a mountpoint.


    As I understand it, I can no longer see these errors because I mistakenly deleted something that I shouldn’t have.


    I apologize for not providing all the necessary details upfront. If there's anything specific I should check or clarify, please let me know. Right now, I feel like I don’t know what I don’t know, and I’m not sure where to start troubleshooting. That’s why I decided to post here—to at least get started somewhere.

    • Official Post

    I now realize that this might have been a mistake—I probably should have installed all HDDs at once, set up the system, and ensured that all disks were recognized from the very beginning of a fresh installation.

    That shouldn't be a problem.

    After setting up the storage, I installed OMV-Extras, then Docker, then Portainer, and finally Plex. I followed this specific installation sequence based on the information I gathered from a few YouTube tutorials:

    My recommendation is that you stop watching YouTube videos. It may be convenient as a first contact to familiarize yourself with the system but nothing more. In general, these videos are not updated and quickly become obsolete, leading to errors. Instead I recommend that you use the openmediavault documentation, omv-extras and the forum guides.

    Also, seeing who the author of one of the videos is, I wouldn't be surprised if you did something atrocious like using user 998 or something similar.

    Furthermore, I would recommend not using Portainer if you do not have some compelling reason to do so.

    Error: 2025-01-23T12:14:30+0200 OMV monit[1007]: 'mountpoint_srv_dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1' status failed (32) -- /srv/dev-disk-by-uuid-edb40772-02f5-41fa-b52e-731b27911ea1 is not a mountpoint.

    I'm not sure where you get this error but the first thing that comes to mind is to check the status of those hard drives with a short SMART test.
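
    For example, a short self-test can be run like this (replace /dev/sdX with the actual device, which you can find with lsblk):

    sudo smartctl -t short /dev/sdX   # starts a short self-test, it takes a couple of minutes
    sudo smartctl -a /dev/sdX         # afterwards, shows the test result and the SMART attributes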

  • I only relied on YouTube because I needed a starting point to get hands-on experience with setting up a NAS—I didn’t expect to run into such issues. However, it seems that even the newest videos I found were already too outdated.


    At this point, I see the best course of action is to redo everything from scratch using the official OMV documentation.


    As for the hard drives, I can’t really check them because the system no longer mounts them, and they don’t even appear in the "Disks" section.


    Would it help if I booted the NAS with a monitor connected and gathered the information displayed during startup? I recall seeing a few errors there from the very beginning after installing the system. At the time, I googled them, and they didn’t seem critical—but maybe they are relevant after all..?

    • Official Post

    Would it help if I booted the NAS with a monitor connected and gathered the information displayed during startup? I recall seeing a few errors there from the very beginning after installing the system. At the time, I googled them, and they didn’t seem critical—but maybe they are relevant after all..?

    Yes, publish them, anything can be useful. You'll have to take a photo I guess, try to make it legible.

  • If the system boots and you have access (local or remote), you can try dmesg | grep -iE 'err|fail' to see if it shows anything.


    Or check the boot log, or the other logs in the GUI, to see if any error or failure shows up (it may be better to download the log to your PC and open it in Notepad++ so you can use the search function).
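
    A couple of other standard commands can help here (journalctl is part of systemd, so it is available on any OMV install):

    journalctl -b -p err   # all messages of priority "err" or worse from the current boot
    lsblk -f               # lists the block devices and file systems the kernel currently sees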

  • Okay, I took some photos and screenshots from the CMD window. There are some errors, but I don’t understand them. I need help figuring out what they mean and what to do next.


    Also, I want to mention that I tested all the hard drives on another computer using HDD Regenerator before installing them into the NAS, and they didn’t show any serious issues.


    I also want to thank you for the help you've already provided—I really appreciate it!

    • Official Post

    They look like block I/O (bio) errors, but I couldn't say more. Maybe someone else can help with this.

    Try searching for information on the internet about those errors. ChatGPT could also give you answers.

  • I can’t believe I was so sure that my cables were fine, only to discover later that they were the culprit. Looking back, I realize that the problem started after I installed a fan to cool the hard drives (they were reaching ~70°C, which is not good for them). The fan installation caused some loose connections, which led to the whole problem. But, due to my previous experience with the first NAS system I built, I didn’t connect the dots correctly. As Chente mentioned earlier, it’s easy to overlook the most common issues with so many possible causes. I regret not following that advice right from the start.

    Step 1: Cable Check

    The first thing I did was check all the cables. I found that one SATA cable was disconnected, which caused one of the RAID arrays to fail, and the power cable had a faulty wire that wasn’t properly connected, causing another RAID array to disappear from the system. This issue wasn’t obvious until I physically took the cables apart and checked them closely.

    Tip: I should’ve started by checking the BIOS first, as the drives causing issues weren’t even showing as plugged in. This narrowed down the problem significantly and helped me pinpoint where to look.

    Step 2: Checking RAID and Disk Visibility

    After fixing the cables, I checked the system. The drives that weren’t showing up before started appearing, and the RAID arrays became visible. This was crucial because two out of the three RAID arrays weren’t visible earlier. Once they appeared, I could narrow down the issue. However, one RAID array showed up as "online," but it wasn’t displaying any disk space availability. This pointed me to a specific RAID array that was corrupted, and from there, I focused on the underlying filesystem issue causing the problem.
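
    (For anyone following along, the array state can also be checked from the command line; /dev/md1 is the same array handled in the next step:)

    cat /proc/mdstat               # overview of all md arrays and their sync state
    sudo mdadm --detail /dev/md1   # detailed status of one array, including failed or missing members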

    Step 3: Filesystem Repair and RAID Recovery

    At this point, I proceeded with unmounting the filesystem. The command I used was:

    sudo umount /dev/md1

    Then, I ran a check on the RAID array:

    sudo fsck -f -y /dev/md1

    The -y flag automatically answered 'yes' to all the system prompts, fixing errors and inconsistencies with the filesystem journal, disk, and quota information, which would have been too tedious to do manually.

    Once the journal was repaired and the filesystem fixed, I was able to successfully remount the RAID array:

    sudo mount /dev/md1

    Everything was back to normal!
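
    (findmnt, part of util-linux, can confirm the result:)

    findmnt /dev/md1   # prints the mountpoint and file system type if the array mounted correctly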

    Step 4: System Check and Monitoring

    After everything was repaired, I ran some checks to ensure the disks were in good health. I used the following commands to monitor the SMART status of each drive:

    sudo smartctl -a /dev/sdf

    sudo smartctl -a /dev/sdg

    Both drives showed no significant issues, and the SMART status indicated that the drives were in good health.

    Conclusion

    In the end, everything was back to normal. I was able to mount the RAID array, fix the corrupted filesystem, and the drives are working without issues. It’s always a good idea to check the basics, like cables and BIOS, before diving into more complex diagnostics. This experience taught me that sometimes the simplest problems, like a loose cable, can cause the biggest headaches.

    I’m leaving this comment here for future users so they don’t make the same mistakes I did. It’s easy to overlook the basics, but as I’ve learned, it’s crucial to start with the simplest checks. I also want to thank everyone who participated and helped along the way. Your input made all the difference!

  • chente added the label "resolved"
  • I had posted initially in another thread, but I had a very similar issue: 3 HDDs were exhibiting errors that could be seen in dmesg. Of the 3, only 2 are actually in use, and I had problems with the shares coming from those 2 HDDs.



    My issue was caused by a SATA power extension cable that the 3 HDDs were connected to. I am using a big case, and the cables from the power supply are not long enough to reach all the HDDs. Fortunately, my solution came from another forum where someone had posted the fix; this may help other people. The funny thing is that my NAS may have had the issue for a long time, and a kernel update actually revealed it. The same thing happened to the poster in the other forum (the issue became visible after a kernel update).


  • Thanks for sharing!

    When I built my system, everything was freshly assembled, and I personally plugged in every cable, making sure they were properly connected. But one small mistake in the pursuit of better hardware cooling ended up giving me a major headache...
