Posts by getName()

    It looks rather strange. The cgroup limitations should not exist by any means. Did you try some kernel hardening or a self-compiled kernel? Did you mess around with kernel modules (rmmod)?
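    A quick way to see what the kernel and docker report for cgroups (just a sketch of the check I mean):
        # what docker thinks about the cgroup setup on this host
        docker info 2>/dev/null | grep -i cgroup
        # which cgroup controllers the running kernel exposes
        cat /proc/cgroups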
    Before reinstalling OMV, try to install another kernel (for example the Proxmox kernel).
    If this does not help, just set docker to all defaults.
    Edit: About killing it, the default systemd unit is configured to restart the service. So if systemd gets exit code 0, it simply restarts the container. systemctl stop docker would have stopped the docker daemon.
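    A minimal sketch of how to check and stop it cleanly (assuming a standard systemd-based install; the drop-in override is only an example):
        # see how the docker unit handles restarts
        systemctl cat docker | grep -i restart
        # stop the daemon properly instead of killing the process
        systemctl stop docker
        # if needed, disable automatic restarts with a drop-in:  systemctl edit docker
        # and add under [Service]:  Restart=no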
    I just realized it is too late now; I am very certain, though, that a reinstallation was unnecessary. In fact, for 99.99% of problems on GNU/Linux there is no need to reinstall the system completely. You can touch and manipulate everything; it's not a black box like Windows.

    I am not intending to be disrespectful by any means, but no, you obviously don't understand it.
    Of course it is an internal network on the host, but it is still bridged.
    If I understand correctly, you want some kind of net=host with additional IPv6 addresses assigned on the host's network device (which one, by the way, all of them?) which then get redirected to the container. How can this not break security and, arguably even more important, the concept? Docker is designed for self-contained environments, or for easy software installation as it is used successfully by many here. Think about the influence on the host's network stack. Docker is intended to have minimal to no effect on the host.
    I commonly use IPv6 in many environments these days, but I see absolutely no gain in Docker. You may need to look for something else, although I don't know what delivers what you are looking for. Docker, LXC, Singularity, they all don't afaik.
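    To illustrate the difference, a short sketch (container and image names are just examples):
        # default bridged network: the container gets a private IPv4 on the docker bridge,
        # not directly reachable from outside the host
        docker run -d --name web nginx
        docker inspect -f '{{.NetworkSettings.IPAddress}}' web
        # host networking: the container shares the host's network stack,
        # including whatever IPv6 addresses the host already has
        docker run -d --network host --name web-host nginx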

    Docker is designed for bridged networking. The way it works, the container IP is not meant to be reachable from the outside. So IPv6 is completely unnecessary, as you won't put more containers in a single Docker network than IPv4 can deliver addresses for.
    What you want is delivered by some proxy and should be handled that way for many reasons. What you want would break the concept and security on many levels. I think you misunderstand what Docker is designed for.
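    The usual pattern, as a sketch (ports, image and upstream address are assumptions, not your setup):
        # publish the container port on the host instead of exposing container IPs
        docker run -d -p 8080:80 --name app nginx
        # or terminate IPv4/IPv6 at a reverse proxy on the host and forward to the bridged container,
        # e.g. an nginx server block with:  listen [::]:443 ssl;  proxy_pass http://127.0.0.1:8080;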

    The only other way I can think of right now is to pull the sysdrive, mount it on another system and chroot into it there.
    ssh-keygen is an easy option to generate keys.
    But first check if the old one is still around and look at its owner and permissions. In the ssh config file there should be a line with HostKey or something similar; the path is defined by that parameter.
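    A rough sketch of the chroot route (device and mount point are assumptions; adjust to your disk):
        # mount the pulled system drive on another GNU/Linux machine and chroot into it
        mount /dev/sdb1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys /mnt/sys
        chroot /mnt /bin/bash
        # inside the chroot: see which key files sshd expects
        grep -i hostkey /etc/ssh/sshd_config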

    Did you remove the .ssh folder? You may have removed the host key. Try to copy the folder as a backup and generate a new host key. Also check permissions and ownership. Maybe you did some chmod -R in a directory above the ssh host key?
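    Something along these lines, assuming the host keys live in the default /etc/ssh location:
        # back up what is left, then regenerate any missing host keys
        cp -a /etc/ssh /root/ssh-backup
        ssh-keygen -A
        # typical ownership and permissions for host keys
        chown root:root /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
        chmod 600 /etc/ssh/ssh_host_*_key
        chmod 644 /etc/ssh/ssh_host_*_key.pub
        systemctl restart ssh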

    It basically tells you what to do. Your filesystem needs to be repaired, and thus you need to perform an fsck. Do you have a working GNU/Linux machine you can plug it into?
    It may however leave you with a partial loss of data on sda1.
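    For example, from another machine or a live system (device name as in your log; never run it on a mounted filesystem):
        umount /dev/sda1 2>/dev/null
        fsck -f /dev/sda1
        # add -y to answer repair prompts automatically, accepting the risk of some data loss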

    I do agree on that one, SMART is of great value on SSDs; still, I have seen SSDs fail randomly, including some of data-center grade quality. I think we should leave the discussion for now. I agree in general that the benefit of RAID1 is limited.
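    For what it's worth, a quick SMART check looks like this (smartmontools assumed installed; the device name is just an example):
        smartctl -a /dev/sda
        # on SSDs, watch attributes like wear levelling / percentage used and reallocated sectors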

    Right, I am aware of this, but it's still not worse than no RAID, is it? For my private use it's less about real availability and more about saving time when a drive fails. I am totally aware it's not a real layer of security and I do have backups too.

    Actually, most times yes. I know it's not a good layer of security, but it has already saved a lot of time for me in different systems, especially since I can just let them keep running in a degraded state until I find time to fix it. I try to use a checksumming filesystem for it when possible, which at least helps against silent corruption in RAID1.
    I know of a lot of issues, but in my opinion it mostly comes down to RAID1 not being as secure as many people think; still, it may be helpful at some point. Am I missing something severe, like a real downside besides the additional hardware?
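    As a sketch of the routine I mean (mdadm array name and btrfs mount point are assumptions):
        # check whether an md RAID1 array is running degraded
        cat /proc/mdstat
        mdadm --detail /dev/md0
        # with a checksumming filesystem like btrfs, a scrub catches silent corruption
        btrfs scrub start -B /mnt/data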

    "Pretty much" ?? ;-(


    I installed the new system on the SSDs (RAID1) with the data disks switched off.
    The installation went smoothly, but the system does not start -
    the message says the array was not found, even though the disks are connected and visible in the BIOS.

    I meant that the way you described it is the way to go, sorry if I accidentally shocked you. @macom probably gave you the right hint.
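    In case it helps, the usual checks when an array is not found at boot look roughly like this (run from a rescue or live system; the commands assume mdadm on Debian):
        # can the array be assembled at all?
        mdadm --assemble --scan
        cat /proc/mdstat
        # make sure the array is known to the initramfs so it is found at boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u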


    @macom: do you think RAID1 as a sysdrive is bad (ignoring the two-disk RAID10 option)? I run it like that too; SSDs are so cheap I don't mind having an additional one running.

    Still, CUDA uses the very same hardware resources. There is absolutely no issue running a desktop environment completely in Docker, as long as it is the only desktop environment running and you give it access to the devices.
    I did build Docker containers to run hardware-accelerated VNC on cluster hardware, for example. This is used for stuff like ParaView and such, where a lot of memory on the graphics card is needed, and it runs perfectly fine on a Tesla V100; no X server is installed on the host, just the necessary kernel modules need to be there. As it is easier for the users, I installed a complete desktop environment and shared it. A 3D benchmark showed about 4% less performance than bare metal on a K80 and about 3% on a V100. The user experience is absolutely the same as bare metal.
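    A minimal sketch of GPU passthrough into a container (assumes the NVIDIA container toolkit is installed on the host; the image tag is only an example):
        # verify the GPU is visible inside a container
        docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
        # a containerised desktop/VNC image would be started the same way, with the devices passed through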
    I do agree, however, that @engrsameen needs to understand Docker, and most likely it is not a good idea to use it here. Using some KVM environment sounds much more like what he is looking for. Maybe he should have a look at Proxmox.