AnonymousDeity
I’m from a small town in VT just across the river, and while that hasn’t been my personal experience, I absolutely have known people who have had that experience. Small town USA is unbelievably messed up, but everyone likes to act like it’s only in the cities. Felt like half my town was on Section 8, and an actual tenth of the population was from the county jail on furlough. Since I left, the violent crime/murder rate has gone up a lot and heavy drug use is rampant.
I’m sorry dude.
ah, yeah, that’s why. You need to mount the unix socket into Caddy’s container as a volume. Docker uses overlayfs by default to create a layered filesystem, and then launches the container’s process in its own user, process, network, etc. namespaces, which is why everything is isolated inside the container. You’ll need to make sure the unix socket is available to Caddy’s process inside the container, so you’ll have to mount it using -v or the volumes key in the yaml.
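Something like this in the compose yaml, as a rough sketch (the service name and the Caddyfile/data paths are placeholders, the socket path is the one from your log):

```yaml
services:
  caddy:
    image: caddy:2
    ports:
      - "443:443"
    volumes:
      # bind-mount the host's tailscaled socket into Caddy's container
      - /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock
      # placeholder paths for your Caddyfile and Caddy's state
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  caddy_data:
```

With plain docker run the equivalent is -v /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock.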
sudo is actually entirely unnecessary with Docker, because most containers will run as the container’s root. Part of containers having their own user and process namespaces means their root user is not your root user (technically we can have a debate about semantics for overlayfs and mounted files), and almost all images will ship with their root as the default user. Therefore, almost all processes will be “run as root” from within their container by default, meaning sudo does nothing except elevate the perms for the user calling docker. It would really only get around an issue with your user account not having access to docker or the docker daemon (also reached via a socket, btw). That said, because of the user namespace thing, running sudo docker run or sudo docker compose up doesn’t actually guarantee the process in the container is run as root… just that the container was created as root with perms over the host’s system.
The important part is that Caddy inside the container will be run by a user that has permissions over the mounted socket.
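A quick way to sanity-check that, assuming the container is just named caddy and the socket path from your log:

```sh
# who owns the socket on the host, and what are its permissions?
ls -l /var/run/tailscale/tailscaled.sock

# what user is Caddy actually running as inside the container?
docker exec caddy id

# can that user see the socket from inside the container?
docker exec caddy ls -l /var/run/tailscale/tailscaled.sock
```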
nginx just has a lower barrier to entry (imo) if you’re not looking to sign your own certs. Caddy is great for that.
That said, I didn’t know Caddy had a beta feature for serving Tailscale certs automatically. So I incorrectly thought you were barking up the completely wrong tree, which you apparently are not. I’ll look at your tech details more.
```
{"level":"error","ts":1691499478.2793655,"logger":"tls.handshake","msg":"getting certificate from external certificate manager","remote_ip":"100.125.48.40","remote_port":"60140","sni":"machine.domain.ts.net","cert_manager":0,"error":"Get \"http://local-tailscaled.sock/localapi/v0/cert/vaulty.taila5148.ts.net?type=pair\": dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory"}
```
This is your main issue - looks like Caddy can’t access the tailscale socket in order to serve their TLS cert. Check that you’re running Caddy > 2.5, check that the socket exists, and check that the user running the caddy process has access to it. docs
Are you running Caddy with docker?
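If it’s not in docker, the checks look roughly like this (the socket path is from your log, and I’m assuming the process is literally named caddy):

```sh
# 1. Caddy needs to be new enough to have the Tailscale cert feature
caddy version

# 2. does the tailscaled socket actually exist on this machine?
ls -l /var/run/tailscale/tailscaled.sock

# 3. which user is the caddy process running as?
ps -C caddy -o user=,comm=
```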
I read your comment in more detail, you’re going down the wrong path. What you’re looking for cannot function the way you want to achieve it, and may not even make sense to want. I am wrong, I didn’t realize Caddy could just serve their cert over the socket. What user is the caddy process on your VM being run as?
If you want to use Tailscale DNS, you can use their TLS cert (assuming it gives a valid cert for machine.domain.ts.net) and just reverse proxy HTTP traffic with nginx on the VPS/VM (assuming nginx can listen on their network device; I’ve fought with that with openresty before, but that may be because I was trying to host it in another docker container lol).
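Roughly what the nginx side could look like, as an untested sketch (the cert/key paths assume you’ve already run tailscale cert for that machine name, and the upstream port is a placeholder):

```nginx
server {
    listen 443 ssl;
    server_name machine.domain.ts.net;

    # files written by `tailscale cert machine.domain.ts.net` (where you keep them is up to you)
    ssl_certificate     /etc/nginx/certs/machine.domain.ts.net.crt;
    ssl_certificate_key /etc/nginx/certs/machine.domain.ts.net.key;

    location / {
        # placeholder upstream, whatever service you're actually exposing
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```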
Use docker compose. This gives the containers DNS for each other (service names resolve via Docker’s embedded DNS on the compose network) as well as putting the containers on their own network, so the main containers won’t be exposed directly, only caddy.
I’m at work but can share an example in a bit.
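In the meantime, here’s the rough shape of it (images, names and ports are placeholders; only caddy publishes a port, and it reaches the app over the internal network by service name):

```yaml
services:
  caddy:
    image: caddy:2
    ports:
      - "443:443"   # only caddy is published to the host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock
    networks:
      - internal

  app:
    image: your-app-image   # placeholder for the service being proxied
    # no ports: entry, so it's only reachable from caddy, e.g. http://app:8080
    networks:
      - internal

networks:
  internal:
```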