Posts by BackWoodsTech

    I've been reading through numerous Windows client troubleshooting threads but haven't been able to resolve this.


    A few days ago I upgraded from OMV5 to OMV6 (as in a fresh install of OMV6; it's an Armbian variant, if that matters).


    The client is Windows 11 and joined to my company domain, although I work remotely.


    Prior to the upgrade, I was able to access my shares just fine.


    When I attempt to access the shares now, I'm prompted for credentials to my OMV device with the warning: "The system cannot contact a domain controller to service the authentication request."


    I've tried providing the user account credentials created during the initial Debian install, to no avail.


    • I've tried adding the account to the Windows Credential Manager as well.
    • I can access the OMV web GUI just fine.
    • The user account can connect to the device via PuTTY just fine.
    • I can ping OMV from the Windows client, and the IP resolves as expected.
    • The user account is a member of the sambashare group.
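    On a domain-joined Windows 11 client, this error often appears when the machine cannot reach a domain controller and is also restricted from falling back to NTLM. On the Samba side, it may be worth confirming the server accepts NTLMv2 and modern protocol versions. A hedged sketch of the relevant smb.conf [global] settings (these may already be the OMV6 defaults; they are shown purely for checking, not as a known fix):

    ```
    [global]
       # The OMV box is not a domain member, so Kerberos is not possible;
       # the Windows client must be able to authenticate with NTLMv2
       ntlm auth = ntlmv2-only
       # Refuse the legacy SMB1 protocol
       server min protocol = SMB2
    ```

    After editing, `testparm` will validate the file before reloading Samba.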


    Two potential clues:

    From the Windows client, if I run:

    net view \\omvhostname


    System error 5 has occurred.

    Access is denied.



    And secondly, if I connect to my company domain via our VPN, I can suddenly access my SMB shares!? If I then disconnect from the VPN, I can keep using the shares for some time until the behavior is observed once again. During this window the net view command works as well.
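    Given that the failure tracks whether the domain controller is reachable, one common workaround (hedged: the drive letter, IP, and share name below are placeholders, not from this setup) is to connect by IP and explicitly qualify the username with the OMV hostname, which forces a local, non-domain credential:

    ```
    :: From a Windows command prompt; Z:, IP, and share are examples only
    net use Z: \\192.168.1.50\share /user:omvhostname\mofo
    ```

    Connecting by IP rather than hostname also sidesteps Kerberos name resolution on the client side.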


    I'm at a loss now and would be so very appreciative of any insight you may have. Thanks!!


    Right, I see what you mean; I didn't realize they were listed under /srv (I'm an idiot). Curiously, if I navigate there I'm only presented with the array, not the drives. Trying to access the array gives I/O errors.


    mofo@Muninn:/$ sudo mount -a
    [sudo] password for mofo:
    mofo@Muninn:/$ ls
    bin dev export lib media opt root sbin sharedfolders svr tmp var
    boot etc home lost+found mnt proc run selinux srv sys usr
    mofo@Muninn:/$ cd srv
    mofo@Muninn:/srv$ ls
    dev-disk-by-label-LookingGlass dev-disk-by-label-Storage ftp pillar salt
    mofo@Muninn:/srv$ cd dev-disk-by-label-Storage
    mofo@Muninn:/srv/dev-disk-by-label-Storage$ ls
    ls: cannot access 'lost+found': Input/output error
    ls: cannot access 'Music': Input/output error
    ls: cannot access 'Users': Input/output error
    ls: cannot access 'plexmediaserver': Input/output error
    ls: cannot access 'Pics': Input/output error
    Music Pics Users aquota.group aquota.user lost+found music pics plexmediaserver vids
    mofo@Muninn:/srv/dev-disk-by-label-Storage$ cd pics
    mofo@Muninn:/srv/dev-disk-by-label-Storage/pics$ ls
    ls: reading directory '.': Input/output error
    mofo@Muninn:/srv/dev-disk-by-label-Storage/pics$
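    I/O errors at the mount point like those above usually mean the md array or an underlying disk is unhappy, so before changing anything it may be worth checking the kernel and md state (a sketch; /dev/md0 is an assumption — use whatever device name /proc/mdstat actually reports):

    ```shell
    # Overall md state: look for a degraded member list like [U_]
    cat /proc/mdstat
    # Detailed array view (substitute the md device from mdstat)
    sudo mdadm --detail /dev/md0
    # Recent kernel messages often show the underlying ATA/link errors
    sudo dmesg | tail -n 50
    ```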


    Is there any reason I cannot/should not unmount the array?

    mofo@Muninn:/mnt/temp$ sudo mount /dev/sdd /mnt/temp
    mount: /mnt/temp: unknown filesystem type 'linux_raid_member'.
    mofo@Muninn:/mnt/temp$ sudo mount -a /dev/sdd /mnt/temp
    mofo@Muninn:/mnt/temp$ ls
    mofo@Muninn:/mnt/temp$ sudo umount /mnt/temp
    umount: /mnt/temp: not mounted.
    mofo@Muninn:/mnt/temp$

    I must be doing something dumb. Perhaps I can't access it due to the degraded state of the RAID.


    Is there any reason I shouldn't just unmount the RAID array at this point if that's possibly what's preventing me from accessing the member drive?


    I'm a little cautious of trying to repair the file system with fsck with the current RAID in place (but perhaps that's completely unwarranted).

    Quote

    No, that should not be necessary.

    My problem there is that I cannot seem to mount & navigate to the individual drive while it is a RAID member:

    mount: /mnt/temp: unknown filesystem type 'linux_raid_member'.

    So this is why I was thinking I'd need to break/delete the current array. Again, not my forte... If you could point me in that direction, I should be out of your hair shortly!
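    For context, a `linux_raid_member` can't be mounted directly because the filesystem lives inside the md container, not on the raw partition. Rather than breaking the array, it is usually possible to assemble it degraded from the one healthy disk and mount it read-only (a sketch; /dev/sdd and /dev/md0 are assumptions from the transcript above, not verified device names):

    ```shell
    # Confirm the disk is an md member and note its array UUID/state
    sudo mdadm --examine /dev/sdd
    # Assemble and start the array degraded with just this one member
    # (--run is what allows starting without the missing disk)
    sudo mdadm --assemble --run /dev/md0 /dev/sdd
    # Mount read-only so nothing on the suspect filesystem is modified
    sudo mount -o ro /dev/md0 /mnt/temp
    ```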

    Quote


    the fact that PMS sees your media, albeit intermittently, could suggest there is a hardware issue, something that is degrading or failing.

    Makes me wonder if it's not the cheapo SATA card itself...lots of speculation on my part.


    A new drive & card are on order. I'd sure like to be able to directly access the supposedly healthy drive in the meantime to confirm a few things. I do have a near-full backup in cold storage on an external drive. But I'm also trying to take the opportunity for a learning experience here, which is why I'm still pursuing working with the existing drive.

    geaves, I much appreciate the thorough response!


    I failed to mention a few other specifics (a couple of which are in my signature, but I should have added them to the details I provided; apologies!): The device is indeed an SBC but does have a PCIe slot, which allowed me to use a PCIe 2.0-to-SATA III adapter (but yes, not a controller proper). And you are also correct that I'm using an older version 5.x.

    To be honest, for my purposes, OMV (I have run 4.x before) has been so damn stable for me that I haven't even thought about upgrading, but I guess this will be the impetus for that change.


    I'm sure it will completely shock you to learn that Linux isn't my forte (although I'm not a complete dummy about basic server management). I'm going to proceed with the plan of attempting to recover the healthy drive.


    • If I were to use the OMV GUI, I would first delete the RAID configuration, correct?
      • This will allow me to access the individual member drive. From there I can inspect the drive health.
    • Why might PMS allow media access & playback (albeit intermittently) when the file shares will not allow any file exploration?
    • How is the array not accessible at all in a degraded state? I've worked with RAID 1, 5, and 10 configurations in a production environment (all different flavors of Windows, though: physical RAID, Storage Spaces, etc.), and the file system was always still accessible even in a degraded state. So this has me a bit nervous.

    Thank you so very much; hopefully I'll be off to fix this soon!

    Attempting to gather some more information based on the posting requirements; I may not be able to provide it all.


    Drives: 2× Samsung 860 EVO 1TB (Man, where does 5 years go!?)

    To my knowledge, nothing has happened. I've not power cycled nor updated for a long, long time. From the OMV logs I can't pinpoint when the issue occurred. My Plex media was working as recently as a week ago, so sometime between then and now is the best I can place it.



    I guess if this isn't worth it, I may just get rid of the RAID, copy the data from the healthy drive (presuming the other drive is indeed toast) to a new SSD, forget RAID 1, and just use rsync to back up from one SSD to another. Hope I got all the info you are looking for!
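    The rsync fallback is straightforward. A minimal sketch of a one-way mirror (the paths here are temporary directories purely so the example is self-contained; in real use they would be the two SSD mount points):

    ```shell
    # Stand-in directories for the demo (real use: the two SSD mounts)
    src=$(mktemp -d); dst=$(mktemp -d)
    echo "demo" > "$src/song.mp3"
    # -a preserves permissions/ownership/times, -H preserves hard links,
    # --delete removes files from dst that no longer exist in src
    rsync -aH --delete "$src"/ "$dst"/
    ls "$dst"
    ```

    The trailing slashes matter: `"$src"/` copies the directory's contents rather than creating a nested `src` directory inside the destination.
    
    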

    This setup has been running for probably 5 years without an issue; today I'm suddenly seeing this. I cannot use the Recover dialog, as it appears a valid RAID name is needed?


    • When I went to list the disks, one of them was indeed not listed; when I scanned, the drive reappeared in the list.
    • It appears that the RAID configuration believes one of the drives has failed, though.
    • Is there a good way to verify the drive health with OMV?
    • Is it possible the SATA controller (I'm using a separate board for this) somehow briefly lost its connection to the drive, or that the drive lost power long enough to do something to the array?
    • I only noticed this because the data was not presenting in my shares. (I figured the RAID 1 configuration would still present the data even in a degraded state???)
    • I humbly submit myself here; I'm nowhere near as competent as the rest of you folks.
    • It could be the case that I just need to present a new drive to the device so I can rebuild the RAID.
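    On the drive-health question: OMV surfaces S.M.A.R.T. data in the GUI (Storage → S.M.A.R.T.), and the same information is available from the shell via smartmontools (a sketch; /dev/sdd is an assumed device name):

    ```shell
    # Quick overall health verdict (PASSED/FAILED)
    sudo smartctl -H /dev/sdd
    # Full attribute table: Reallocated_Sector_Ct and
    # Current_Pending_Sector are the usual signs of a dying drive
    sudo smartctl -A /dev/sdd
    # Kick off a short self-test; read results later with -l selftest
    sudo smartctl -t short /dev/sdd
    ```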

    thank you very much!


    Very interesting, thanks! The two partitions on the SD card are a good idea.

    Seems like there are two general schools of thought around here about disaster recovery:

    1. Just keep cloning SDs; when one dies, pop in a new one.
    2. Back up OS files (to a safe location) for later recovery.

    or something like that....

    Poor title sorry.

    Understanding that any OMV install will likely grow over time, I'm trying to harmonize my OS image backups across a variety of SD cards and also reduce the time backups take. Let's say my minimum card size is 16GB (I know that actual size varies from card manufacturer to card manufacturer) and I've offloaded my storage/media and Docker installs to another drive(s).

    I have a clean clone of the current install, but as we know, Armbian will extend itself over any available space, so I want to use GParted to shrink things down to a minimal size with sufficient headroom.

    I was just going to use 8GB as my standard, but even that seems high. I get that there are a lot of factors at play here, but has anyone who's been at this for a while settled on a reliable partition size for their cloned images? Perhaps my logic is entirely flawed to begin with... apologies in advance.
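    One way to standardize regardless of card brand is to work with a fixed-size image file rather than the raw card capacity (the usual companion steps, shrinking the filesystem with `resize2fs -M` and trimming the partition table first, are omitted here). A minimal sketch of creating an 8 GiB target image:

    ```shell
    # Create a sparse 8 GiB image as the standard clone size (no root needed;
    # sparse means it takes almost no real disk space until written to)
    truncate -s 8G omv-clone.img
    # Verify the logical size in bytes: 8 * 1024^3 = 8589934592
    stat -c %s omv-clone.img   # 8589934592
    ```

    Restoring such an image to any card of at least 8GB then behaves identically, and Armbian can grow the filesystem back out on first boot.
    
    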

    Adoby

    Yeah, I cloned just before I installed Docker so I'll have to reclone here after all this.


    My OCD got the best of me, so I ended up changing the directory, (re)installing Docker and Portainer, deleting the old folders, and then recreating the PMS container. I feel a lot better now and things are clean. That said, PMS on the RockPro is still chewing through scanning the 600GB music library (for the second time this week).


    Onward and upward. More SD cards coming soon for more clones.


    Sorry for the irrelevant thread I started here... I should have just ripped the band-aid off in the first place.

    I accidentally created a folder in the location /srv/dev-by-label-[name]/folderName. [This became my Docker storage location, so that is going to be a pain to change.]

    Creating file shares using a relative path defaults to /srv/dev-disk-by-label-[name].


    Is there a way to create a share from an existing folder by supplying the absolute path?

    So would my path then be:

    /srv/dev-disk-by-label/LookingGlass/DockerStorage

    /srv/dev/disk/by-label/LookingGlass/DockerStorage


    ? Sorry for the dumb questions.
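    For what it's worth, judging from the `ls /srv` output pasted earlier in the thread, the label-based mount point is a single hyphenated directory name (`dev-disk-by-label-LookingGlass`), not nested path segments, so neither candidate above looks quite right. A quick way to check (folder name taken from the candidates above):

    ```shell
    # If this prints the path, that is the absolute path to supply
    # for the shared folder; the mount point is one directory name
    ls -d /srv/dev-disk-by-label-LookingGlass/DockerStorage
    ```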