Hi guys! I’m making my first Docker attempt…and I’m doing it in Proxmox. I created an LXC container, in which I installed Docker and Portainer. Portainer seems happy, and shows its admin page on port 9443 correctly. Next I tried running the Immich image, following the steps detailed in their own guide. This…doesn’t seem to open the admin website on port 2283. But then again, it seems to run on its own Docker internal network (172.16.0.x). How should I reach the Immich admin page from another computer on the same network? I’m new to Docker, so I’m not sure how containers are supposed to communicate with the normal computer network…Thanks!

  • @iturnedintoanewt@lemm.ee (OP) · 7 points · 8 months ago

    Sure…But Proxmox is already there. It’s installed and it runs 5 VMs and about 10 containers. …I’m not going to dump all that just because I need Docker…and I’m not getting another machine if I can use the one I have. So…sure, there might be overhead, but I’ve seen other people doing it, and the only other alternative I saw was running Docker in a VM…which is even more overhead. And I fear that running it bare metal on the Proxmox host might conflict with how Proxmox manages the LXC containers.

    • chiisana · 5 points · 8 months ago

      Docker inside LXC adds not only the overhead each layer would individually add (probably not significant enough to matter in a homelab setting), but also an extra layer of complexity that you’re going to hit whenever you need to debug anything. You’re much better off dropping Docker into a full-fledged VM instead of running it inside LXC. With a full VM, if nothing else, the VM’s virtual network interface can be treated as its own separate device on your network, which removes a layer of complexity from the problem you’re trying to solve.

      As for your original problem…it sounds like you’re not exposing the Docker container’s network to your host. Without knowing exactly how you’re launching things (beyond the quirky Docker-inside-LXC setup), it’s hard to say where the issue is. If you’re using Compose, try setting the network to external, or bridge, and see if you can expose the service’s port that way; there’s a sketch below. Once you’ve got the port exposure figured out, you’re probably better off unexposing the service, setting up a proper reverse proxy, and wiring the service to go through the reverse proxy instead.
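      A minimal sketch of what I mean, assuming a fairly stock Immich compose file (the service name and internal port here are guesses; adjust them to whatever your docker-compose.yml actually uses). It’s the `ports:` mapping that publishes a container onto the host so other machines can reach it:

      ```yaml
      # Hypothetical excerpt of a docker-compose.yml -- names are illustrative.
      services:
        immich-server:
          image: ghcr.io/immich-app/immich-server:release
          ports:
            # host_port:container_port -- this is what publishes the web UI on
            # the host's IP, so http://<LXC-or-VM-ip>:2283 works from other
            # machines on the LAN.
            - "2283:3001"
          # No custom network is needed just to publish a port; the default
          # bridge network that Compose creates is fine for this.
      ```

      If you later remove the `ports:` entry, the service stays reachable from other containers on the same Docker network (for example a reverse proxy container), just not from the LAN directly.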

      • @iturnedintoanewt@lemm.ee (OP) · 1 point · 8 months ago

        Thanks! When I type my LXC’s IP:2283, I get “unable to connect”. I checked the docker-compose.yml and the port mapping seems to be 2283:3001, but no luck with either port. Is there anything that needs to be done on Docker’s network in order to…“publish” a container to the local network so it can be seen? Or can any container with a published port be reached via the host’s IP with no further config? Checking Portainer’s networks section, I can see an ‘immich-default’ network using bridge on 172.18.0.0/16, while the system’s bridge seems to be running at 172.17.0.0/16. Are these the correct defaults? Should I change anything? (Are the commands sketched below the right way to check this?)

        Thanks!
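        A rough sketch of the checks, run from inside the LXC (container names are a guess):

        ```sh
        # Which ports has Docker actually published onto the host?
        docker ps --format 'table {{.Names}}\t{{.Ports}}'

        # Per-service port mappings for the compose stack (run in the compose directory).
        docker compose ps

        # Is anything on the LXC listening on 2283?
        ss -tlnp | grep 2283

        # Does the service answer locally, before involving the LAN at all?
        curl -I http://localhost:2283
        ```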

    • @earmuff@lemmy.dbzer0.com · 2 points · 8 months ago

      Add a new VM, install docker-ce on it and slowly migrate all the other containers/VMs to Docker (rough install sketch below). The end result is way less overhead, way less complexity and way better sleep.
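      Roughly, on a fresh Debian/Ubuntu VM that’s just the following (a sketch using Docker’s convenience script; you can also add the apt repository by hand per the official docs):

      ```sh
      # Install Docker Engine plus the Compose plugin via Docker's convenience script.
      curl -fsSL https://get.docker.com | sh

      # Optional: run docker without sudo (log out and back in afterwards).
      sudo usermod -aG docker $USER

      # Sanity check.
      docker run --rm hello-world
      ```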

      • @iturnedintoanewt@lemm.ee (OP) · 2 points · 8 months ago

        Thanks…So you think a full VM will result in less overhead than a container? How so? I mean, the VM will take a bunch of extra RAM and add overhead by running a full kernel of its own…

        • @earmuff@lemmy.dbzer0.com · 1 point · 8 months ago

          I was assuming you’d be able to get rid of the other 5 VMs by doing so. If not, then obviously you would not end up with less overhead.

          • @iturnedintoanewt@lemm.ee (OP) · 1 point · edited · 8 months ago

            Yeah, the ones that are VMs can’t easily be turned into containers…I would have moved them over to LXC already, as that’s been my preferred choice until now. But Home Assistant was deployed from the VM template provided by HA, and the Windows VMs…well, they’re Windows. I also have an ancient nginx/Seafile install that I’m a bit afraid to move to LXC, but at some point I’ll get to it. Having Immich for pictures would shrink some of the Seafile libraries a bit :)

            • @earmuff@lemmy.dbzer0.com · 2 points · 8 months ago

              My HA is running in Docker. It’s easier than you might think. Forget about LXC. Just take your time migrating things, and only shut off a VM once its service works in Docker. Believe me, managing Docker is way easier than managing 5 VMs with different OSes. Docker Compose is beautiful and easy; see the sketch below.
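              For example, a typical Home Assistant compose file looks more or less like this (treat it as a sketch; the config path is an assumption and will differ on your side):

              ```yaml
              services:
                homeassistant:
                  container_name: homeassistant
                  image: ghcr.io/home-assistant/home-assistant:stable
                  volumes:
                    - ./config:/config                 # HA configuration lives here (assumed path)
                    - /etc/localtime:/etc/localtime:ro
                  restart: unless-stopped
                  # Host networking keeps device discovery (mDNS and friends) simple.
                  network_mode: host
              ```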

              If you need help, just message me; I might be able to give you a kickstart.

    • @vzq@lemmy.blahaj.zone · 2 points · edited · 8 months ago

      You are absolutely free to fuck yourself over by using a niche option plagued by weird problems.

      Or you could, like, not do that.

    • @grehund@lemmy.world · 2 points · 8 months ago

      Jim’s Garage on YouTube recently did a video about running Docker in an LXC; I think you’ll find the info you need there. It can be done, but if you’re new to Docker and/or LXCs, it adds an additional layer of complexity you’ll have to deal with for every container/stack you deploy.

    • Scrubbles · 1 point · edited · 8 months ago

      I did it that way for years. It’s not worth the hassle, my man. I did the same, told people that it’d be fine, that it was more performant and so it was worth it. But then the problems, oh lord, the problems. Every Proxmox update brought hours or days of work trying to figure out how it broke this time. Docker updates would completely bork it. Random freezes, permission errors galore. I threw in the towel, figuring I was just hacking it together to make it work anyway.

      Now I do VMs on Proxmox. Specifically, I swapped to k3s, which is a whole other thing, but Docker in VMs runs fine, and there’s so much less annoyance. Self-hosting became a lot less stressful.

      Learn from our mistakes, OP