Posts by MikeSmith

    So while researching Files | Add from example I stumbled onto this. It looks like it was implemented late in OMV 6 but only fully in OMV 7?


    Or maybe I'm missing something?


    The real question I had was: should I be using Add from example to install Nextcloud, or something else? There is an entry for it... so I think that is the "proper" way, but I'm not sure.


    Any light on this feature?


    Where can I go to read about Add from example under Compose | Files in the docker compose plugin?


    I'm obviously missing something basic, so please speak slowly.

    So I recently installed OMV 7 and just got Docker working. I'd like to get Nextcloud working. I had it in OMV 5, but a lot has changed in OMV 7.


    I noticed in OMV | Services | Compose | Files, when I click on Add from example there is an option for:

    nextcloud - Nextcloud gives you access to all your files wherever you are.


    Should I be installing it that way? I've seen posts about Nextcloud AIO; should I be following them?


    As of OMV 7, what is the "correct" / recommended way to get Nextcloud working?
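    For reference, the compose plugin's examples are, as far as I can tell, just ordinary compose files that the plugin deploys for you. Below is a hypothetical minimal sketch of what a Nextcloud compose file looks like; the image tag, host port, and volume path are my own placeholders, not necessarily what the bundled example contains.

```yaml
# Hypothetical minimal compose file for Nextcloud (no AIO).
# Port and volume path are placeholders -- adjust for your system.
services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"                                # host port 8080 is an assumption
    volumes:
      - /srv/<your-disk>/nextcloud:/var/www/html # placeholder data path
```

    Whether you start from the example, from AIO, or from a hand-written file, it all ends up as a compose file the plugin manages, so the Add from example entry is a legitimate starting point.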


    Thanks

    Ok so just setting up an OMV7 system.


    I have one hard drive in an Odroid HC2.


    It's a Western Digital Red 6TB hard drive connected via SATA.


    I've seen:

    Spin Down or Not / Or use Advanced Power Management 2017

    hd-idle on sourceforge

    Correct HDD settings for ensure long life 2020

    HDD Power Management 2017


    I've also looked here:

    Powermanagement for todays hard drives (WD Red Pro)? 2022

    Playing with disk settings


    So in 2024, what are the correct settings to use for:

    Advanced Power Management

    Advanced Acoustic Management

    Spindown Time

    Enable write-cache
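    On the CLI, those four settings map onto hdparm options. A hedged sketch (the device name is a placeholder, and not every drive honors every option):

```shell
# hdparm equivalents of OMV's per-disk settings (run as root).
# /dev/sda is a placeholder -- substitute your data drive.
hdparm -B 127 /dev/sda   # Advanced Power Management: 1-254; values <=127 allow spindown
hdparm -M 128 /dev/sda   # Automatic Acoustic Management: 128 = quiet, 254 = fast (if supported)
hdparm -S 242 /dev/sda   # Spindown timeout: values 241-251 mean (n-240)*30 min, so 242 = 1 hour
hdparm -W 1 /dev/sda     # Enable the drive's write cache
```

    The GUI fields should end up writing the same values, so this is mainly useful for checking what actually got applied.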


    In TrueNAS they say to use camcontrol identify <drive ID> to see if the attached hard drives support advanced power management and automatic acoustic management, but I don't think OMV has that command?
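    camcontrol is a FreeBSD tool; since OMV runs on Debian, the rough equivalent of camcontrol identify is hdparm -I (or smartctl -i from smartmontools). A sketch, with /dev/sda as a placeholder:

```shell
# Query a drive's capabilities on Debian/OMV (run as root):
hdparm -I /dev/sda | grep -iE 'advanced power management|acoustic management'
smartctl -i /dev/sda   # identity info via smartmontools (what OMV's SMART page uses)
```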


    If it does what is it called?


    Is there a guide I'm missing for setting this up correctly? If so can someone point me towards that?


    I do realize this is often brought up, but I have looked and am having problems finding an acceptable solution.


    Thanks for any help.

    So I ran a few tests:


    - I ran the system with only a disk in Bay 1 and it's stable.

    - I ran the system with only a disk in Bay 2 and it's stable.

    - I ran the system with only a disk in Bay 3 and it's stable.

    - I ran the system with a disk in Bay 1 and Bay 2 and it's stable.

    - I ran the system with a disk in Bay 1 and Bay 3 and it's stable.

    - I ran the system with a disk in Bay 2 and Bay 3 and it's stable.


    Going to now test all three disks at once.

    This time around after a reboot:

    Storage | Disks: All 3 visible but the lettering starts at sdb, sdc, sdd


    Storage | File Systems:

    sdb: missing

    sdc: Available and Used are blank; it's not mounted, but it is referenced. Status: online

    sdd: has available space, is mounted, and is referenced. Status: online


    I don't know what's going on :/


    Not sure where to go or what to do at this point.


    Anyone have a clue?
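    When it happens again, a few non-destructive checks from the CLI might narrow down whether the kernel is dropping the disks or OMV is just not mounting them (device names and patterns are only a starting point):

```shell
# Do the block devices still exist, and what does the kernel say?
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT,UUID                     # partitions still present?
dmesg | grep -iE 'ata[0-9]|link (down|up)|reset|i/o error' | tail -n 20
grep /srv /proc/mounts                                        # OMV mounts data filesystems under /srv
```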

    So that didn't work either.

    Again the 3 drives are unmounted after less than 24 hours.


    I've seen different variations. One variation, in Storage | File Systems: the status is online and referenced but not mounted, and available space is blank.


    Another variation, in Storage | File Systems: the status is missing and everything is blank.


    In another variation, the drives are missing from Storage | Disks entirely.


    I'm not sure what is going on. Perhaps OMV is just in a bad state and I should reinstall OMV?


    Not sure where to go from here.


    In all cases a reboot "restores" the drives.
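    Since a reboot "restores" them, the kernel log from just before the drives vanished is the most useful evidence. Next time, before rebooting, something like this (the grep pattern is only a rough filter) may show a link reset or power event:

```shell
# Kernel messages from the last day, filtered for ATA/link trouble:
journalctl -k --since "-24 hours" | grep -iE 'ata[0-9]+|link|offline|reset'
```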

    So today I looked at the NAS and all three drives are Missing in the Storage | File Systems. A reboot will restore them.


    I'll try:

    128 - Minimum power usage without standby (no spindown)


    Now to see if that will fix it. Everything else is still set to Disabled.


    I'm wondering if a fresh install of OMV is in order?

    Maybe something related to the energy states of the disks? If you have it activated, you can try deactivating the sleep state to rule out possible problems in this regard.

    Ok I went into Storage | Disks and set all three drives to:


    Advanced Power Management:

    64 - Intermediate power usage with standby


    Advanced Acoustic Management:

    Disabled


    Spindown time

    Disabled


    I guess I'll see how this works out and report back. Shouldn't take too long.

    I'll preface this post with the fact that I honestly don't know what to even search for, though I did try to search for my issue on both Google and this forum.

    Looked for "File System mount unstable"


    Oh, I saw a lot of mentions of USB, and I am NOT using USB to connect any hard drive. It's all SATA.


    I did look at the Solutions to common problems sticky and didn't see this.


    Background:

    I have 3 drives in my 4-bay NAS system. Hardware-wise it's an Odroid H3 NAS with a 2-port SATA card in the M.2 slot. I've had it running great for over a year now with no issues.


    I had a 4-drive system with two RAID 1 pairs, but a few weeks ago I had RAID issues and decided to remove RAID altogether.


    So I currently have a 3-drive EXT4 system, nothing RAIDed.


    The OMV6 system is fairly vanilla, with no plugins installed besides Remote Mount.

    I have rsync tasks running, but I'm not really running an rsync server; it acts as a client all the time.

    I also have some SMB/CIFS shares.


    Issue:

    So 3 EXT4 drives: sda1,sdb1,sdc1.

    In File Systems I can see them.

    I can see how much space is available and used; mounted and referenced are checked off, and the status is online.

    If I leave everything alone, after a "time" I can go back to the OMV GUI, go to File Systems, and all 3 drives will no longer be mounted.

    Available, used, and mounted will be blank or not checked off. Referenced will still be checked and online is still good.

    SMART shows all status as good

    But for whatever reason the drives disappear and are no longer mounted.

    I can go to the CLI and, lo and behold, they are not mounted.


    If I reboot the system everything goes back to normal... for a "time".


    Does anyone know what is going on and how to solve this? I'm debating doing a fresh install but am not sure.
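    As a stopgap: if lsblk still shows the devices and only the mounts are gone, re-applying fstab avoids the reboot. A sketch:

```shell
# Remount everything declared in /etc/fstab (run as root):
mount -a
# Confirm the OMV data mounts under /srv came back:
findmnt -R /srv
```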

    I've deleted all the shares and deleted any references to the hard drives:


    Services | SMB/CIFS | there are no Shares.

    Services | Rsync | there are no Tasks.

    Services | Reset Permissions | there is nothing listed

    Storage | Shared Folders | nothing shared


    When I go to:

    Storage | Software RAID |

    and select the volume, I can try to delete it. But when I hit the delete button, this error is thrown:


    After a time the volume returns, sometimes with different /dev/sdX letters.


    At that point if I try to delete that volume the delete button is grayed out.


    After a shutdown or reboot the volume returns so obviously it was never deleted.


    Not sure where to go from here.

    So I'm not sure if I need a new post, but I've deleted all the shares and deleted any references to the hard drives, yet I still cannot delete the RAID.


    So:

    Services | SMB/CIFS | there are no Shares.


    Services | Rsync | there are no Tasks.


    Services | Reset Permissions | there is nothing listed


    Storage | Shared Folders | nothing shared


    Storage | Software RAID | select /dev/md0 and still can't delete


    Not sure where to go from here.


    I did manage to delete the other clean, degraded mirror. The clean md0 RAID I cannot delete.
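    For what it's worth, a stubborn array can usually be removed from the CLI. This destroys the array metadata, so only with backups in hand; the member partition names below are placeholders:

```shell
# Tear down a leftover md array (run as root; destroys RAID metadata):
umount /dev/md0 2>/dev/null                   # in case anything still has it mounted
mdadm --stop /dev/md0                         # stop the running array
mdadm --zero-superblock /dev/sdb1 /dev/sdc1   # placeholders: the former member partitions
# Without zeroing the superblocks, mdadm reassembles the array at boot,
# which would explain the volume "returning" after a reboot.
```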

    Yes, that is the absolute path, but in order to remove the shared folder, don't you need to first remove any references to it? How can you easily tell what references the shared folder?
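    One way to answer that from the CLI: shared folders and whatever consumes them live in OMV's config database, so you can grep the config for the folder's reference. A sketch (the datamodel id is from OMV 6's omv-confdbadm; double-check it on your version):

```shell
# List shared folders (note each one's uuid), then find what references it:
omv-confdbadm read conf.system.sharedfolder
grep -n 'sharedfolderref' /etc/openmediavault/config.xml   # shows every consumer
```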


    Quote

    You will lose the data on the drives.

    Oh that's strange, why would you lose the data on a RAID 1 drive? I'm just removing the RAID 1 from the device, so does removing the RAID format the drives? And if so, why? I was under the impression that RAID 1 was just a mirror and that I could take either of those two drives, plug it into virtually any enclosure, and retrieve the data.
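    Whether a lone RAID 1 member is readable in a plain enclosure depends on the md superblock format, which you can check: formats 0.90 and 1.0 put the metadata at the end of the partition, so the filesystem starts at offset 0 and mounts directly, while 1.1 and 1.2 put it at the start, so a plain mount fails (the data is still intact, but you'd assemble it with mdadm instead). A sketch, the device name being a placeholder:

```shell
# Check where the md superblock lives on a member partition (run as root):
mdadm --examine /dev/sdb1 | grep -E 'Version|Data Offset'
# "Version : 0.90" or "1.0"     -> filesystem at sector 0, mounts directly
# "Version : 1.2" + Data Offset -> filesystem is offset; use mdadm to assemble
```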

    Hello everyone, I currently have two independent RAID 1 pairs with two disks each. Last week I got notified that both sets of disks are in the "clean, degraded" state.


    I searched the forum and found several related posts.


    After some thoughts I came to the decision to stop using RAID.

    I found this here: https://kbase.io/delete-a-raid-volume-in-openmediavault/ which points to a previous OMV forum post. That post was made in 2016, and I figured it would be good to ask if anything has changed in the last 8-ish years. Does the procedure in that kbase article still apply to OMV6? I'm simply trying to delete the RAID.


    All my data is backed up, but it would be nice not to lose the data on the disks when I delete the RAID 1. Does anyone know whether, after following the procedure, I'll lose all the data in the mirror, or will I basically have two independent copies?


    Also, I know one of the issues with deleting the RAID is that you can't do it if there are shared folders on the RAID volume, but is there any way to see exactly which folders are shared under the RAID?


    Thanks for all the help.