Beiträge von Copaxy

    I am planning to change my hardware too, which is very old.

    I have a Pentium with a motherboard that only has SATA 2.

    I have 6 HDDs connected and 1 SSD for the OMV system.


    I saw that you solved your problem.

    I am interested in the things to take into account when I do the change.

    Hi arno,


    well, I didn't really solve it; I just marked it as solved because I was a little disappointed that no one had any idea.

    But maybe a little input for you. Before I bought the hard drives I use now, I had older SMR hard drives, 16 TB in total, and the drives were a bit broken; some had a 100% failure rate based on SMART. But I didn't want to let them die in a drawer, so I thought "Screw it, I'll put them in again as temporary storage if I just want to temporarily store something big there." So I put them in with the thought "If they die, they die." But it seems like they were slowing down the boot process for whatever reason.


    What I did was plug all drives from my old PC into the new one, turn off every stupid HP company setting in the BIOS, and that's it.

    Also, I recreated the swap space in case there were compatibility issues.
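
    For anyone wondering what "recreating the swap" means in practice: a minimal sketch on a Debian-based OMV install looks roughly like this (/dev/sdX2 is only a placeholder, check the real swap partition with lsblk first):

        sudo swapoff -a            # stop using the old swap
        sudo mkswap /dev/sdX2      # re-initialize the swap partition (placeholder device)
        sudo blkid /dev/sdX2       # note the new UUID
        sudo nano /etc/fstab       # put the new UUID into the swap line
        sudo swapon -a             # enable swap again; verify with: swapon --show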


    I don't know if it helps you.

    I have been using OMV6 on an HP ProDesk i3 without any problems. I have now switched to an HP Z240 i7 system, and based on other OMV hardware-change topics I moved all my hard drives and the boot drive to the new system and switched off all the unneeded remote and proprietary BIOS settings from HP. The system seems to boot fine and everything works. I also recreated the swap space in case of a compatibility issue.


    Now the main issue: the system is really, really slow on boot. I mean really slow, like it takes 20-30 minutes to boot. The actual OMV logs show the boot process itself is normally fast, but from powering on the system, the HP logo shows up, goes away, shows up again, etc., and gets stuck for some time until the real Debian boot begins.

    I have gone through the BIOS over and over again, but I cannot get it to go faster.


    What I have noticed, though, is that if I disconnect all drives except the boot drive it is a little faster, almost as if the system checks something in the background before boot that takes longer the more drives are attached. But I have no idea what it could be.


    I don't know why it takes so long. I just changed hardware, and it is even another HP system.


    Does anyone have an idea what is going on, or know some common issues?


    I have attached my syslog.txt in case anyone needs more info, but from what I have seen it seems fine.
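
    If it helps anyone reading later: one way to check whether the time is really lost before Debian even starts is systemd-analyze; on an EFI install it splits the boot into firmware, loader, kernel and userspace time (my assumption is that the firmware part is where the minutes go):

        systemd-analyze time       # overall breakdown: firmware / loader / kernel / userspace
        systemd-analyze blame      # slowest services, in case userspace is the culprit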

    Not quite. I meant whatever your "Public" shared folder is currently called: first change its SMB share config by setting the public option to "NO", then save & apply the change. Then for that shared folder allow all your users read/write service permissions via the "Storage | Shared Folders | Permissions" screen, and save & apply the change. So now when any of your users connect to SMB with their creds, they should have access only to their own folder and this now-common shared folder. This avoids the Windows guest logon problem you had.

    It worked :) I don't know about the long-term test, but for today your solution worked and Windows didn't complain :)


    If you want users to see only those folders they can access, rather than all the shared folders including ones they cannot access, you can add this to the extra options on the "Services | SMB/CIFS | Settings" tab:
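
    The actual option text is missing from the quote above; my assumption is that the setting meant is the standard Samba parameter that hides shares the connecting user has no permission for, i.e. something like:

        access based share enum = yes    # assumed option; hides shares the connecting user cannot access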

    Ohh, okay. I will try that. Thank you very much for your help :)

    The reason I cannot simply give everyone access to everything is that during a testing phase my grandpa accidentally deleted my personal backup folder 😅 He is a bit clumsy and thought he was deleting some of his old pictures until I said "Grandpa... you deleted my backup folder, not your pictures 😅"


    Since then, I thought maybe it is better to have some restrictions 😅

    Would changing your "Public" folder to a "Common" share to which all users have r/w permissions solve your problem? So, for example, a grandparent using their account creds to connect to the SMB shares would then see both their "one folder" and the "Common" share.

    You mean instead of having one public folder with no credentials and the other folders with their user credentials, just adding the public folder to their user account?

    Maybe this could work. I have to try this.


    Explorer usually shows all folders, but they require user credentials, and if they connect with the right one they only have access to the folders for that user.

    I am not too much of an expert in SMB sharing, but that is what I did in the beginning to make it work.

    Hello,


    I read through some Windows SMB troubleshooting guides and didn't manage to solve this.

    I have an OMV NAS and 6 different shared folders on SMB. I have my user account and my family's user accounts. My user account can read and write all folders, my parents' account only some, and my grandparents' only one folder.


    Ever since I built the NAS it has always been troublesome to access the folders over SMB because, as far as I know, Explorer in Windows is not designed to handle multiple shared folders with different user access credentials. Sometimes it works perfectly as intended, and sometimes Windows, while booting up, connects to my "public" folder, which has no user credentials. Everyone can use it, and it is some kind of temporary folder for data. Then I obviously cannot connect to another folder with my user credentials anymore because Windows is already connected to the public folder.

    I have not saved any credentials or anything like that. My Windows Credential Manager is empty.
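
    For anyone with the same symptom: Windows keeps at most one set of credentials per server, and you can list and drop the existing SMB connections from a command prompt with net use (the server and share names below are just placeholders):

        net use                               # list current SMB connections
        net use \\NAS\public /delete          # drop the guest connection to the public share
        net use \\NAS\myfolder /user:myuser   # reconnect with the intended account (prompts for the password)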

    I also currently have no other solution for separating the folders to avoid the multiple-user-credential mess.


    Does someone know a better solution for this setup, or how to solve the multiple-user-credential shared-folder error Windows is throwing?

    But I also cannot simply give every folder the same permissions.


    It was okay until now, but sometimes, especially recently, it is just annoying.

    There are really only two things you have to worry about with docker networks:

    1) What is the network name and what containers do you have on it

    2) Does a container require a LAN IP instead of a Docker IP? (Things like a Pi-hole container require this and as such need to be attached to a macvlan.)
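
    A minimal sketch of the macvlan case, assuming a 192.168.1.0/24 LAN on interface eth0 (adjust subnet, gateway and parent to your network):

        docker network create -d macvlan \
          --subnet=192.168.1.0/24 \
          --gateway=192.168.1.1 \
          -o parent=eth0 \
          lan_macvlan

        # example: give a Pi-hole container its own LAN address on that network
        docker run -d --name pihole --network lan_macvlan --ip 192.168.1.53 pihole/pihole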

    Okay, understood.



    Pruning will not remove anything that is in use by a container, but it can remove things that are left behind after a container is shut down because the down command removes the container's dynamic data.
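
    For completeness, the prune commands in question; they only touch stopped containers, unused networks and dangling images, never anything currently in use:

        docker container prune     # remove stopped containers
        docker network prune       # remove networks not used by any container
        docker image prune         # remove dangling images
        docker system prune        # all of the above in one go (asks for confirmation)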

    Ah, okay so the dynamic data gets removed.


    Thanks a lot for the useful information. I think this will solve a lot of network issues in the future. 😊

    If the network were to get pruned, the network gets re-created on launch.

    One question related to docker network prune. You said if I prune a network by accident, it gets recreated after a restart. Will the configuration also be recreated, or is the container broken and I have to reconfigure it? Because once I accidentally pruned my Nextcloud network and everything was messed up. I had to reconfigure it.

    Question:


    How are you deploying your containers? Compose plug-in, portainer, cli?


    If using the plug-in or portainer stacks, you can use the network commands in the compose files to easily create and attach to different networks. Containers will automatically create and attach to a network on launch. If the network were to get pruned, the network gets re-created on launch.


    I have 30+ containers running, but I like to group them on different Docker networks based on purpose, using a combination of the network commands and multi-container compose files (I use Portainer), i.e. web-accessible stuff is on a "web" network, admin-related stuff on an "admin" network, central database-related stuff on a "database" network, etc. I don't do this for IP limits, but more for organization and a little increased isolation of web-accessible containers.
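
    As a rough example of what that grouping looks like in a compose file (the service and network names are just placeholders):

        services:
          my_app:
            image: nginx:alpine
            networks:
              - web            # joins the shared "web" network

        networks:
          web:
            driver: bridge
            # stacks that should join an already existing "web" network would
            # instead declare it with external: true and name: web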

    Most of my containers are deployed as docker-compose stacks. Yes, I have to modify their networks and group them, you are right. I just lacked a reason to group them; now I know one 😅. I don't want to reconfigure some containers, but I guess I have to. Thanks for your help and knowledge.

    I should also note that the subnet mask on a network dictates the number of available IP addresses. A /24 network can handle 256 IP addresses (the first and last are usually reserved, so 254 usable), and a /16 network can handle 65,534 addresses, so I highly doubt you are running out of IP addresses unless you have been messing with something that has broken the networking.

    The reason I think reconfiguring their networks makes sense is that some containers will never use as many IPs as a /16 network provides, but they block that IP range for other containers. So I think I will create a small network for containers that do not use many IPs. Previously I was wasting IP range a bit because I never assumed I would have so many containers. Turns out I was wrong and I need them. 😅

    Never mind, I'll just do it as I go.


    docker network ls showed me a lot of individual /16 networks. A lot of my containers have no specific network settings, so they will always try to use the default Docker address pool 172.16.0.0/12 and create smaller /16 networks. And I guess that default pool is out of address space, so I needed to create a new specific network for my new containers, for example a "container31_network" with the range 172.32.0.0/24.
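
    A sketch of creating such a network and attaching a container to it (the names, image and subnet are just the examples from above):

        docker network create --subnet=172.32.0.0/24 container31_network
        docker run -d --name container31 --network container31_network some/image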


    I think the default network has no space anymore.

    1) I don't think you want to manually modify daemon.json to configure networking -- there is a series of commands via docker network that you should probably execute from shell instead.


    2) All of your running Docker containers are already assigned addresses & subnets -- I don't think manual/forced changes to the network while they're running will end well! In a "best case scenario" they might re-connect to a new network scope after a restart... but if they don't you'll lose IP connectivity to your existing containers.

    Yes, true, you are right. It is not a good idea to configure it like that in the file. Good that I have not done it yet.


    3) In what network do you suspect you've expended all available IP leases?(!) For example "bridge" or "host"? By default Docker networks are Class B ranges, which in theory can accommodate 65,534 hosts per network! How many containers are you running? 8| haha

    I have around 30 containers and I get the message that there are no available IP addresses left for new containers.

    Based on docker network ls, most of my containers are on the bridge network. So I guess that it is full, and I have to create a new one? I would like to know what range the default bridge network has, because only about 30 available IPs seems very low.

    I am still learning docker networking so I still need some help.

    I have reached the maximum number of Docker containers on my host machine and have no free IP addresses for new containers.


    For this I need to add another network to Docker, and I have found a post about that. I have to edit the /etc/docker/daemon.json file, but apparently the change gets reset after a restart of the host machine.

    My question: Do I now have to find a way to make the above configuration permanent, or, if I can also create a network myself, how do I add it to the default Docker networks so that Docker knows it can use its IP addresses to create new containers?
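
    For reference, the daemon.json setting usually meant in such posts is default-address-pools, which changes the pool Docker carves new networks out of. The file is persistent; the change just needs a daemon restart to take effect (the range and size below are only examples, and the snippet goes into /etc/docker/daemon.json):

        {
          "default-address-pools": [
            { "base": "10.200.0.0/16", "size": 24 }
          ]
        }

        sudo systemctl restart docker   # apply the new pool; existing networks keep their ranges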

    Hmm... my guess would be the SSH client, because the container does not provide one. But it could use the host (server) SSH client.


    Other tutorials don't mention anything complex for setting up SSH. They just go to the settings, set up the credentials, and that is it. I do the same, but it does not work. Also, the RDP connection works, so the container should be able to communicate with the outside. I do not get why the Guacamole container seems not to use my host's SSH client to connect to another device, because if I set up the connection to my host it works.


    Maybe I could set up an SSH client container and use it as a client for Guacamole? But this would just be a workaround.

    What I want to achieve -> Get SSH access to my RPi from my Guacamole container on my server.


    What is the problem -> The Guacamole container is accessible and is able to make an RDP connection to a Windows PC on my network, but no SSH connection to my RPi. No matter which device on my network I try to get an SSH connection to, it will not work.


    The only SSH connection that works from the Guacamole container is to the host server. But to anything outside the server (everything else in my network), no SSH connection is possible.


    I feel this is a bit weird, because if the container were not able to connect to a device outside the host server, the RDP connection to a Windows PC on the network would not be possible either. So the question is, what is causing the SSH connection issue?
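
    One way to narrow it down would be to test the connection from inside the same Docker network that Guacamole/guacd uses, with a throwaway container (the network name and the Pi address are placeholders; the real network name comes from docker network ls):

        docker run --rm -it --network guacamole_default alpine sh
        # inside the throwaway container:
        apk add --no-cache openssh-client
        ssh -v pi@192.168.x.x      # the verbose output shows whether a TCP connection to the SSH port opens at all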

    Yes, I have SSH accessible over port 55.

    The container has port 8222 opened and is accessible via the browser, and the RDP connection to Windows also works, so why not the SSH connection?

    sshd is running, and I can connect to it from any PC in the network, but not from my Guacamole container. I have checked the username and password; everything is correct. I mean, my server is also in the network, so I should not have to open SSH for remote connections.

    I have set up a Guacamole container and want to connect via SSH to a Raspberry Pi 4 with AdGuard on it. It doesn't work, and when I try to understand why, I run into other problems.


    So my Guacamole container is reachable from my host server (I can ping the container) and from everything on my server. My server can reach other devices on the network, and ping works.

    When I try to ping my Pi with AdGuard, the Pi is reachable.

    When I now try to ping my Guacamole container on the server from my Pi with AdGuard, via the container IP 172.18.0.2 or via the server IP and port 192.168.xxx.xx:8222, it is not reachable (100% packet loss). I can reach the Guacamole container at 192.168.xxx.xx:8222 in the browser, but I cannot ping the container, and SSH also won't connect.

    I have looked into the firewall rules, but I haven't changed anything from the defaults. How can I reach my Guacamole container? I have searched many tutorials, guides, and posts, but I still have no idea why it is not working.


    Any ideas?