How are y’all managing internal network certificates?
At any point in time I have between 2 and 10 services, often running on a network behind an nginx reverse proxy, with some variation in how they handle certificates, none of it ideal. Here’s what I’ve done in the past:
- set up a CLI CA using openssl
  - somewhat works, but importing the CA into phones was a hassle
- self-sign a single cert per service
  - works; very kludgy, but very easy
- expose the HTTP port only on the lo interface for sensitive services (e.g. pihole admin), SSH local tunnel when needed (rough commands for the last two options below)
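For concreteness, the self-sign and tunnel options above boil down to roughly this; names, ports and paths are placeholders, not my actual setup:

# one-shot self-signed cert for a single service (the -addext flag needs a reasonably recent openssl)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout pihole.key -out pihole.crt \
  -subj "/CN=pihole.lan" -addext "subjectAltName=DNS:pihole.lan"

# local SSH tunnel to an admin UI that only listens on 127.0.0.1 on the server
ssh -L 8080:127.0.0.1:80 user@pihole.lan
# then browse http://localhost:8080/admin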
I see easy-RSA seems to be more user-friendly these days, but I haven’t tried it yet.
I’m tempted to try this setup for my LAN-facing services (as opposed to tunnel-only ones, such as pihole):
- get a letsencrypt cert for a single public DNS domain (e.g. lan.mydomain.org)… not sure about a wildcard cert
- use letsencrypt on the nginx reverse proxy and expose the various services as sub-URLs (e.g. lan.mydomain.org/nextcloud); rough sketch below
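Roughly what I have in mind on the nginx side, as a sketch only (upstream ports are made up and I’m assuming certbot’s default cert paths):

server {
    listen 443 ssl;
    server_name lan.mydomain.org;

    # certbot's default install location
    ssl_certificate     /etc/letsencrypt/live/lan.mydomain.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lan.mydomain.org/privkey.pem;

    # each service hangs off a sub-URL
    location /nextcloud/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /pihole/ {
        proxy_pass http://127.0.0.1:8081/;
    }
}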
Curious what y’all do and if I’m missing anything basic.
I have no intention of exposing these outside my local network, and I’d prefer as few client-side changes as possible.
You should be able to do wildcards with ACME v2 and a DNS challenge: https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579
You would manage internal DNS and would never need to expose anything, since validation is all done through a TXT record.
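For example, certbot can do the DNS-01 wildcard issuance manually (a DNS plugin can automate the TXT record); the domain below is just the example from the post:

certbot certonly --manual --preferred-challenges dns \
  -d "lan.mydomain.org" -d "*.lan.mydomain.org"
# certbot prints a TXT value to publish at _acme-challenge.lan.mydomain.org,
# waits for you to add it, then issues the cert under /etc/letsencrypt/live/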
You could also use something like traefik to manage the cert generation and reverse proxying.
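A minimal sketch of what that could look like with Traefik v2’s file provider and a DNS challenge (the resolver name, email, storage path and Cloudflare provider are all placeholders; the provider’s API token also has to be passed via environment variables):

# static config (traefik.yml)
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /acme.json
      dnsChallenge:
        provider: cloudflare

# then route each service, e.g. via Docker labels:
#   traefik.http.routers.nextcloud.rule=Host(`nextcloud.lan.mydomain.org`)
#   traefik.http.routers.nextcloud.tls.certresolver=letsencrypt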
Fellow Caddy user here. I’d love to set that up. Can you share your Caddyfile or at least the important snippets?
I have a public wildcard DNS entry (*.REMOVEDDOMAIN.com) on Cloudflare for my primary domain that resolves to 192.168.10.120 (my Caddy host).
Caddyfile
{
    email EMAILREMOVED@gmail.com
    acme_dns cloudflare TOKENGOESHERE
}

portal.REMOVEDDOMAIN.com {
    reverse_proxy 127.0.0.1:8081
}

speedtest.REMOVEDDOMAIN.com {
    reverse_proxy 192.168.10.125:8181
}
Certbot in cron if you’re still managing servers.
I’m using cert-manager in kube.
I haven’t manually managed a certificate in years… Would never want to do it again either.
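For anyone curious, the cron side is basically just running certbot renew on a schedule, and the kube side is a cert-manager issuer plus Certificate resources. A rough ClusterIssuer sketch, assuming Let’s Encrypt with a Cloudflare DNS-01 solver (names and secret refs are placeholders):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token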
Probably not the ‘recommended’ way, but I use a self-signed cert for each service I’m running, generated dynamically on each run, with nginx as a reverse proxy. Then I use HAProxy and DNS SRV records to connect to each of those services. HAProxy uses a wildcard cert (*.domain.tld) for the real domain and uses host mapping for each subdomain (service1.domain.tld).
This way every service has its traffic encrypted between HAProxy and the actual service, and the traffic on the frontend is encrypted with a browser-valid cert. So I only need to actually manage one cert: the HAProxy one. It’s worked great for me for a couple of years now.
Edit: I’m running this setup for about 50 services, but mostly accessed over LAN/VPN.
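In case it helps anyone, here’s roughly what that looks like in haproxy.cfg; IPs, names and the cert path are made up, and I’ve left out the SRV-record discovery piece:

frontend https_in
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.domain.tld.pem
    # host mapping: send each subdomain to its own backend
    acl host_service1 hdr(host) -i service1.domain.tld
    use_backend bk_service1 if host_service1

backend bk_service1
    mode http
    # the backend presents its throwaway self-signed cert, so skip verification
    server service1 192.168.1.50:443 ssl verify none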
I use the linuxserver.io SWAG container. It runs an nginx reverse proxy and does certificate management for you. It’s a pretty great minimal-config option.
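If it helps, a docker-compose sketch along those lines; the domain, email and DNS plugin are placeholders, so check the linuxserver.io docs for the exact variables:

services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    cap_add:
      - NET_ADMIN
    environment:
      - URL=mydomain.org
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      - EMAIL=you@example.com
    volumes:
      - ./swag-config:/config
    ports:
      - "443:443"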
I use NPM (Nginx Proxy Manager) to handle all my reverse proxying and SSL certs. Authelia easily ties in to handle my SSO. What a time to be alive!