Posts by Copaxy

    A quick update and request for help too.


    I don't know what changed in the race condition around when the driver of the SATA hat loads, but for the past few days it has mounted every time without problems. So it seems like the kernel is not so busy at the beginning and my drives load fine.


    But there is a different problem.


    I changed the power supply to one with 96W, so it has more than enough power. Hopefully no more power peaks causing the drives to go missing. But when OMV starts, everything is fine at first, then suddenly the drives go missing again, and based on the logs it is because of a superblock issue.


    I also saw a SnapRAID content file issue when I ran a snapraid status. I renamed the content file that caused the problems, so it "should be fine" -> there was also data missing in the bad one.


    Now, about this superblock issue...

    At the end of the log file it searches for each drive.


    Does anyone have an idea?

    syslog (4).txt
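
    In case it helps with the diagnosis, this is one way to check whether a superblock is really damaged (a rough sketch, assuming ext4 filesystems on the data drives; /dev/sdX1 is a placeholder for the real partition):

    Code
    sudo dumpe2fs -h /dev/sdX1   # print the primary superblock of an ext4 filesystem
    sudo fsck.ext4 -n /dev/sdX1  # read-only check, reports problems without changing anything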

    samba is a service that waits for local storage before it starts. If the disks aren't ready, it doesn't bother it.

    Ah, understood.

    mergerfs is storage that has to wait for the disks in order to mount its pools. If the disks aren't ready, it can't mount the pools and fails. There is no concept in systemd of waiting for a driver. Nothing I can do about this. The Radxa hat would be 1000 times better if it used an in-kernel driver instead of the weird "driver". That is why I don't use my hat.

    Ok, true. I don't want to force you to do something about it 😅 I just wanted to understand, that's all. :)


    The behavior is inconsistent because it is a race condition. Sometimes the driver must be loaded earlier in the process. I stopped looking at it a long time ago. If someone (not me) finds a fix, I will try to add it but I'm not optimistic.

    Ah okay, it's a race condition; that makes it more obvious why this happens.

    So basically it is gambling: if I'm lucky the driver loads early enough and the disks are mounted, and if not, the disks are not mounted in time and mergerfs fails like it did in my case.


    Yeah, I would also prefer an in-kernel driver, but it is how it is 🤷‍♂️.


    Maybe I'll find a way around this someday, or never.
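
    For anyone who wants to experiment, here is a very rough sketch of what such a workaround could look like: a small "wait for the hat's disks" service plus a drop-in on the pool's mount unit. Everything below is an assumption, not an official fix: the device names are examples, and the mount unit name is only guessed from the pool path used later in this thread (check the real name with systemctl list-units -t mount).

    Code
    # /etc/systemd/system/wait-for-hat-disks.service  (hypothetical helper unit)
    [Unit]
    Description=Wait until the SATA hat disks have appeared
    # no default dependencies, otherwise this would form an ordering cycle with local-fs.target
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # poll for up to 60 seconds until the example devices exist
    ExecStart=/bin/sh -c 'for i in $(seq 60); do [ -b /dev/sda ] && [ -b /dev/sdb ] && exit 0; sleep 1; done; exit 1'

    # /etc/systemd/system/srv-mergerfs-MyNAS.mount.d/wait.conf  (drop-in; name must match the real mount unit)
    [Unit]
    Requires=wait-for-hat-disks.service
    After=wait-for-hat-disks.service

    After creating the files a systemctl daemon-reload is needed, and whether this actually wins the race still depends on how late the hat's "driver" shows up.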

    The problem is the sata hat. That is why mine is a paperweight. The "driver" that allows the drives connected to the hat to be visible to the OS loads too late for systemd. I don't have any way to fix that in the plugin.

    Ahh okay, understandable, but then I have another question.

    Why is it that when I don't set up any shared folders and SMB connections the drives work rock solid, but as soon as I add a mergerfs pool and shared folders it is gambling? The last time I reset the shared folders and mergerfs pool, it worked fluently at first, then after a few reboots and days it started to get buggy and then stopped working completely. Only after repeating the whole process does it work again for a while.


    I am not complaining, it is just that I don't get it... if it is as you are saying and the "driver" loads too late for systemd, then wouldn't the drives never load properly? Then the driver would always be too late and never in time.

    But the thing that confuses me is that it loads properly, but after adding mergerfs suddenly it doesn't, so I am wondering what leads to the driver being loaded too late...

    Or maybe something is delaying the driver loading, so that after the settings I put in it is suddenly too slow?


    mhm


    Any ideas why the behaviour is so inconsistent?
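
    One way to actually see the race on a given boot is to compare timestamps in the journal, i.e. whether the hat's disks showed up before or after the pool mount was attempted (a sketch; the grep pattern is only an example and depends on the device names):

    Code
    journalctl -b -o short-monotonic | grep -iE 'mergerfs|sda|sdb|sdc'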

    From the systemd file for the mergerfs pool:

    config.xml


    All UUIDs seem correct and everything looks normal, but I still have no clue why the error occurs and OMV cannot mount anything.

    Interesting things I found in the syslog:


    Code
    Sep 12 12:43:52 MeinNAS chronyd[885]: System clock wrong by 36294.544873 seconds
    Sep 12 12:43:52 MeinNAS systemd[1]: Starting Clean php session files...
    Sep 12 12:43:52 MeinNAS chronyd[885]: System clock was stepped by 36294.544873 seconds

    OMV suggests that I use the nonempty mount option on the mergerfs pool, but since it is not in the fstab anymore, how can I tell OMV that while booting?

    Any ideas why the disks fail to mount?
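
    Since the pool is no longer in fstab, the options it gets mounted with live in the generated systemd mount unit. One way to look at them (a sketch; the unit name is only a guess derived from the pool path and should be confirmed first):

    Code
    systemctl list-units -t mount | grep -i mergerfs
    systemctl cat srv-mergerfs-MyNAS.mount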

    Okay, sorry, but I have to reopen the thread.


    I don't know what caused it, but since today OMV doesn't recognize my file systems anymore :/


    Code
    Couldn't extract an UUID from the provided path '/srv/mergerfs/MyNAS'.
    a few seconds ago

    It also sometimes says something about not being able to find the UUID in fstab. But didn't the way of mounting drives get changed so that it works without fstab?
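
    To cross-check which UUIDs the kernel actually sees at that moment, something like this can help (a sketch; both commands only read information):

    Code
    sudo blkid                 # filesystems and UUIDs the kernel currently sees
    ls -l /dev/disk/by-uuid/   # the same information, shown as symlinks to the devices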


    I am confused about what happened or what is causing the problem...


    Another thing: when I click on some pages of OMV, it shows an error about old cached data and needs to be reloaded. Sometimes I cannot open a page because it always needs to reload.

    It only takes one drive to cause serious problems. A drive can fail in ways that can reset a PC or SBC. Intermittent power shorts could do it.

    For future reference, that kind of issue can be isolated by disconnecting all drives (or other connected peripherals) and reconnecting them one at a time. It can be an exhaustive process, especially if the problem is intermittent, but there's no other way to be sure.

    Yes true.


    I did test every drive one by one and reconfigured OMV6 step by step; that is why it took so long to test. But all drives had problems one day or another, so in the end I had to replace all 4.


    One drive is still okay sometimes, so I use it as an external drive on the PC for quick file transfers. No need to worry then about when it will fail completely.

    Quote

    Let's assume the supplier knows what they are doing. Then if they provide a way to ground it, it is needed. If they don't provide one, it is not needed. Metal cases need to be grounded, plastic cases don't.

    Ah okay. Interesting.


    Thank you for the information :)

    If done correctly it is ok.

    On the AC side, be aware of the 230V. 230VAC can cause serious harm. Also take care of grounding (yellow/green wire).

    On the DC side be aware of the polarity (+/-).

    And make sure to select the correct voltage.

    Okay, that sounds good.

    Yes, I am aware of the 230VAC and the polarity.


    I would like to go into more detail on the grounding, because a SCHUKO cable has 3 separate wires (L phase, N phase and ground), but a DC transformer only has 2 inputs and outputs. The L and N phases should be no problem, just connect them correctly as you said, but what about the grounding? How do I properly ground the whole unit?

    Previously I only used normal Euro connectors without ground for devices below 5A, so I don't have much experience with grounding on SCHUKO cables.

    That would be great. All too often, when users figure out what's wrong they disappear. Real world feedback for these odd issues is useful info for all.
    Thanks.

    Sooo. Sorry it took so long, but I figured things out, I guess.


    Solution for the frequent restart problem:


    I bought 4 new HDDs and that fixed the problem. I guess the drives were at the end of their lifetime. My guess is that my drives suddenly went missing and OMV was not able to remount them, so it restarted but was unable to find them again. I also made sure that no SD card was faulty.


    This solved the problem for me.

    I have one more question for the electronics experts here.

    Since it is not so "easy" to find a proper DC round-connector power supply with around 80-90W, does it make a difference if I just take a normal SCHUKO cable, connect it to a DC transformer capable of delivering 90W DC, and connect that to a DC round-connector cable?


    I used one for an external fan setup with a Noctua NA-FC1 4-pin fan control unit, and also for lamps and LEDs.

    So basically, is it okay to build a power supply like that, or is it a bad idea?


    suggestions?

    Which HAT version do you have? v1 or v2?

    Where can I see which version I have? I got it quite early after it was released, so I believe it should be version 1, but I am not sure.


    If you have the v2, you're using a 12V power supply that feeds both the HAT and the Pi (as it should be).

    If 60W (5A) isn't enough, get a power supply with more juice (75W or 90W).
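
    For reference, the arithmetic behind those figures, assuming the HAT side runs from the 12V rail mentioned above: 60W / 12V = 5A, 75W / 12V ≈ 6.3A and 90W / 12V = 7.5A, so the bigger supplies leave roughly 1.3-2.5A of extra headroom for spin-up peaks.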


    I don't think it's a good idea to put 2x power supplies on that rig at the same time (if I read it right, 1x USB-C on the Pi itself and the other plugged into the HAT; you're putting power on the GPIO from 2 distinct sources):

    Yeah true, I am already looking for one with more juice.

    Yes, I know this is probably not the best solution, but I just thought the Pi could take its juice from the Pi power supply on the USB-C and the other, bigger power supply could power the disks.


    I measured the power draw when using the Pi by itself and with the HAT.

    When I use just the Pi, it takes about 12W at idle; when I use it with the HAT and both power supplies, the Pi power supply on the USB-C port takes around 10-12W and the bigger HAT power supply (on the DC round connector) takes around 29-38W. The Pi power supply stays quite stable, right at what the Pi usually consumes.


    Since I don't have a new power supply, I will continue with both power supplies, but when I buy a new one I will get one with more power.

    Update:


    Since the NAS has now been online for 4 days without interruption, I believe it was the power issue. For now it runs stably.


    Thank you chente for trying to narrow the issue down. I just tried the additional power supply as a test, so it was more of a gamble, but luckily it worked :).

    Are the hard drives connected directly to the PI or is there an intermediate power supply?

    There is a SATA hat in between with an additional power supply.


    But I think I may have found the issue. I have one 60W power supply for the SATA hat, which can also power the Pi itself, but after the last mount issue I added the standard Pi power supply as well. So now I have two power supplies that power the Pi and the HDDs.


    Nothing is guaranteed yet, but it has been successfully online for more than one day now instead of just 6h, so maybe, if I am lucky, it was just the high peak power draw that caused the drives to be unmounted. But I am still testing.

    Hi

    I have OMV6 on a Raspberry Pi and I have replaced my HDDs with new ones. The swap was successful, but there is one problem occurring with mergerfs.


    I have created a pool of 3 HDDs and the pool works fine, but after, let's say, 6h OMV shows me an error that the filesystem could not be mounted, even though it was there a few seconds ago. The drives are 100% new out of the box.


    Any idea why the pool disappears and then, after a reboot and restarting the pool, appears again?


    In the syslogs I found mounting errors or something like that, but I don't really know what caused them.


    Does someone with better knowledge than me have an idea?
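
    A couple of read-only checks that can help narrow down where the mount fails when the pool disappears (a sketch; the device names are examples and smartctl assumes the smartmontools package is installed):

    Code
    sudo smartctl -H /dev/sda              # quick SMART health verdict, repeat for each drive
    sudo dmesg | grep -iE 'ata|I/O error'  # kernel messages from around the time the filesystem dropped off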

    I don't know anything about Lexar but I do know that SanDisk and Samsung manufacture their own cards where other brands may contract them out (usually to the lowest bidder) and put their label on them. Here's -> a comparison of Lexar to Samsung.

    Here's info on -> Building OMV6
    A utility that tests SD-cards is in -> prerequisites. -> h2test_1.4.zip
    (Note the minimum A1, Class 10 recommendation for the SD-card.)

    And -> the SD-card test procedure.
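
    As a side note, if a card is already sitting in a Linux machine, the f3 tools perform roughly the same fill-and-verify test as h2testw (a sketch; /media/sdcard is a placeholder for wherever the card is mounted):

    Code
    sudo apt install f3
    f3write /media/sdcard   # fills the free space with test files
    f3read /media/sdcard    # reads them back and reports any corruption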

    Thank you very much for all the help.


    I get your point. I don't want to be biased, but I have had a lot of SanDisk SD cards and the bigger ones for my DSLR are okay, but the MicroSD cards... Maybe I was just unlucky, but to be honest, out of around 28 SD cards, almost all from SanDisk and two from Samsung, they disappointed me quite a lot. They broke when I did not want them to break, and they almost never kept up their speed, never.

    Maybe I was just unlucky, but I don't trust them as much as before. About Samsung I cannot say much for now: one failed 3 weeks after purchase and the other one performs well, but I would have expected better latency, way better.


    My Lexar does perform well, I think. They mention a 90MB/s write peak and I get 85MB/s for 2h straight, so I would say it is okay.

    I know Samsung and SanDisk make their own, but sadly I cannot change my experience with them. Maybe they will surprise me in the future. The Lexar cards I now own (two) were just a test, but they actually surprised me positively. But who knows, everyone has different experiences.


    I don't want to praise or bash any manufacturer; just give me working cards that do what they should or what is promised, and I am happy.


    As for the testing, I will do that with the others too.

    Yesterday I tested my Lexar one: h2test was good, I verified my installation with Win32 Disk Imager, and everything was also fine.


    I will proceed further, but this may take a while.


    If you have an O-scope or one of the more high-end multimeters, it might have a ripple function for testing switching supplies. (I tend to doubt you have one of these items.)

    This is one of the problems with toy hardware. When I originally bought R-PI's, I bought two of them, with 2 power supplies and 2 SD-cards. That's the only way to have some certainty when using consumer hardware in a server role. It's helpful to be able to swap components versus guessing at what might be wrong. And I've dealt with the headaches associated with buying generic SD-cards. (The generics worked for a while, then did bizarre things before they finally failed, early I might add. Lesson learned - never again.)

    I still have Arm devices, but my OMV main server and first backup are on server grade hardware.

    No, I don't have one :( but good to know how to test it.


    Just a small note: I am still a student and sadly don't have thousands of euros lying around, otherwise you can be sure that my server would be based on server hardware :) So please keep that in mind before ranting about RPis.

    I already made a thread (a few months ago) about a possible new server for when I have the money. I can assure you that I would like to put the Pi aside, but at the moment I cannot, so I have to deal with that thing and have a tiny server or none at all.


    No bad blood, just a note.


    Also note that the above could be coincidental. Hardware can, and will, fail over time. Nothing is perpetual especially if you're running the R-PI without an UPS or a surge suppressor.

    Sure, maybe the day has come for my Pi.


    Of note, as mentioned before, Raspberry PI OS may change without notice. I ran into an issue with building OMV5 on R-PI OS Lite that (as I remember) was related to networking. They did something that appeared to be, well, foolish. Then, out of the blue, the issue disappeared in the next minor update. Stuff like that happens.

    true. Happens.


    That would be great. All too often, when users figure out what's wrong they disappear. Real world feedback for these odd issues is useful info for all.
    Thanks.

    Yeah true, I will post my conclusions if I reach any. But it may take a while. :)



    And if, from what I recall, you had installed a lot of stuff (removing/uninstalling afterwards), just start fresh with a blank SD card and rebuild the server.

    Although OMV5 is almost EOL, better to have it running with zero errors (on OMV5) than fighting with OMV6 with flaws.

    Maybe in the meantime, RADXA will have a RaspiOS Bullseye arm64 update (make a support request on their forum, perhaps).

    Yeah, I am really considering switching back to OMV5. If I get no conclusion out of my tests, then I will switch back.



    On a sidenote, after some testing, I feel Pis don't do well with Jellyfin:

    I tried to have it on my network, one Pi4 (4GB) Bullseye arm64 running Jellyfin in Docker and serving several clients in the house.

    It streamed "low bitrate" files OK but would struggle with high-quality MKVs.

    And this was with only 1 client connected.

    They are no powerhouse, for sure. I managed to stream 2x 4K MKVs, one on my phone and one on my parents' 4K TV, without stuttering, but with hardware acceleration. Of course real server hardware is better :) and I am counting the days until I can send the Pi into retirement.