
macgregor

macgregor@lemmy.world
4 posts • 30 comments

FYI, as a (non-Lemmy) backend developer: this is completely normal, standard use of IP addresses in a system that isn't designed around harvesting your personal data. IP addresses are commonly used to handle active (in-flight) requests efficiently and securely (security for the server more than for you), so you generally only see them in specific network logs, like those of the reverse proxy, rather than stored long term in a DB. Most of us who aren't in advertising or government want to know as little about you as possible.
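
For a concrete picture of where that IP typically lives, here's a hedged illustration: a reverse proxy access log on disk, with a made-up documentation IP, request path, and client name, assuming an nginx-style setup and log location.

# The client IP only shows up in rotated access logs like this, not in the application DB.
tail -n 1 /var/log/nginx/access.log
# 203.0.113.42 - - [12/Oct/2025:14:01:22 +0000] "GET /api/v3/post/list HTTP/1.1" 200 5123 "-" "Jerboa"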

Being privacy mindful is good, but it is a deep and creepy rabbit hole to go down. Stay safe out there 🙂.


Generally a hostname-based reverse proxy routes requests based on the Host header, which some tools let you set. For example, with curl:

curl -H 'Host: my.local.service.com' http://192.168.1.100

Here, 192.168.1.100 is the LAN IP address of your reverse proxy and my.local.service.com is the service behind the proxy you are trying to reach. This can be helpful for tracking down network routing problems.

If TLS (https) is in the mix and you care about it being fully secure even locally, it can get a little tricky depending on whether the route is pass-through (the application handles certs) or terminate-and-re-encrypt (the reverse proxy handles certs). Most commonly you'll run into problems with the client not trusting the server because the "hostname" (the LAN IP address when accessing directly) doesn't match what the certificate says (the DNS name). There are lots of ways around that as well, for example adding the service's LAN IP address to the cert's subject alternative names (SAN), which feels wrong but works.
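
If you want to verify which names a certificate actually presents (and whether the LAN IP made it into the SANs), something like this works; the IP and hostname are the placeholders from the curl example above.

# Connect to the proxy by LAN IP while sending the expected SNI name, then dump the cert's SANs.
openssl s_client -connect 192.168.1.100:443 -servername my.local.service.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'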

Personally I just run a little DNS server so I can resolve the various services to their LAN IP addresses and TLS still works properly. You can use your /etc/hosts file for a quick and dirty “DNS server” for your dev machine.
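
As a sketch, the /etc/hosts version looks like this (same placeholder name and LAN IP as above):

# /etc/hosts on the dev machine -- points the service name at the reverse proxy's LAN IP
# so TLS hostname verification lines up with the certificate.
192.168.1.100   my.local.service.com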


Many databases and database clients have an "upsert" operation, which is exactly this: create or update this entity. If the DB supports it, you can save an explicit lookup, giving minor performance and code-cleanliness improvements in the application, but it might shift that performance cost to the DB (I had to roll back a prod change not too long ago because someone switched to a PG upsert and it caused average CPU to rise; haven't gotten a chance to investigate why yet, something about indexes probably).
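
For reference, the PG flavour looks roughly like the sketch below; the table, columns, and values are made up, and it assumes a unique constraint or index on the conflict column (which is also the index angle I suspect in that CPU regression).

# Hypothetical table foo(id, name) with a unique index on id.
psql -c "INSERT INTO foo (id, name) VALUES (1, 'bar')
         ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;"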

Anyway, I tend to start with just explicit create and update methods and add an "upsert" abstraction if I find myself sprinkling lots of checks around and making the code messy. So I would go for "createOrUpdateFoo" in that case.


So I have Jellyfin deployed to my Kubernetes home lab, with the router port-forwarded to the ingress controller (essentially a reverse proxy) on the cluster, so it is exposed to the internet. Everything on it has authentication, either built into the application or via an OAuth proxy. All applications also have valid SSL configurations thanks to the reverse proxy. I also use Cloudflare DNS with their proxy enabled to access it, and I have firewall rules to drop traffic hitting ports 80/443 that doesn't originate from those Cloudflare proxy IPs (required some scripting to automate; roughly the sketch below). It drops a lot of traffic every day. I have other security measures in place as well, but those are the big ones.
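
The Cloudflare allowlist part can be automated with something along these lines; this is a rough sketch rather than my actual script, it assumes iptables, and it only covers IPv4 on port 443.

#!/bin/sh
# Allow 443 only from Cloudflare's published proxy ranges, then drop everything else on that port.
for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
  iptables -A INPUT -p tcp -s "$ip" --dport 443 -j ACCEPT
done
iptables -A INPUT -p tcp --dport 443 -j DROP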

So yeah, if you expose your router to the internet, it's gonna get pinged a lot by bots and someone might try to get in. Using a VPN is a very simple way to access your services securely without exposing yourself, and I'd suggest going that route unless you know what you're doing.


FYI, you will not be able to do live video transcoding with a Raspberry Pi. I overclocked my Pi 4's CPU and GPU and it just can't handle anything but direct play and maybe audio stream transcoding, though I've never had luck with any transcoding, period. I either download a format I know can direct play, or (recently) use Tdarr (server on the Pi, node running on my desktop when I need it) to transcode into a direct play format before it hits my Jellyfin library. Even just using my AMD Ryzen 5 (no GPU), it transcodes something like 100x faster than a Tdarr node given two of the Pi's CPU cores. You could probably live transcode with a decent CPU (newer Intel CPUs are apparently very good at it) if you run Jellyfin on the NAS, but then you're at odds with your low power consumption goals. Otherwise, Jellyfin on a Pi is great.
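
"Transcode into a direct play format" concretely means something like the ffmpeg one-liner below, which is roughly what a Tdarr flow does; the filenames and quality settings are just placeholders.

# Re-encode to H.264 video + AAC audio in an MP4 container, which most clients can direct play.
ffmpeg -i input.mkv -c:v libx264 -preset medium -crf 20 -c:a aac -b:a 192k output.mp4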

Good luck, I’d like to build a NAS myself at some point to replace or supplement my Synology.


Got a refurbished APC coming in today. Looking forward to not having to worry about my NAS drives or losing internet because of a split-second power blip.


Switched to a qBittorrent + gluetun sidecar recently and it's been pretty good compared to the poorly maintained combo torrent + OpenVPN images I was using. Being able to update my torrent client image/config independently of the VPN client is great. Unfortunately most of the docs are Docker focused, so it's a bit of trial and error to get it set up in a non-Docker environment like Kubernetes. Here's my deployment in case it's useful for anyone. Be careful to configure qBittorrent to use "tun0" as its network interface or you will be exposed (I got pinged by AT&T before I realized that one). I'm sure there's a more robust way to make use of gluetun's DNS over TLS and iptables kill switch that doesn't require messing with the qBittorrent config to stay secure, but that's what I have so far and it works well enough for now.
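
A quick sanity check is to compare the public IP seen from the pod's default route against the one seen when bound to tun0; the deployment and container names below are illustrative, and it assumes curl is available in the container.

# Both should print the VPN provider's IP rather than your home IP; the second fails if tun0 isn't up.
kubectl exec deploy/qbittorrent -c qbittorrent -- sh -c \
  'curl -s https://ifconfig.me; echo; curl -s --interface tun0 https://ifconfig.me; echo'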


Look for refurbished units; you can get enterprise-grade gear for about half the retail price. I recently got a refurbished APC from refurbups.com. They come with brand new batteries, and it's mostly rack-mountable stuff. It ended up being a little over half the price of a brand new one, with shipping. I can't tell at a glance whether they ship to Canada, but if not, I'd be surprised if there wasn't a similar Canada-based site you could find.


I don't see how Starfleet allowed Data to remain onboard after that one. Being in the tech industry, I often feel the Federation's infosec is lacking in often trivial ways (unless the episode calls for better security, of course 🙂), but maybe they have just accepted that sort of thing as the cost of doing space business, since it happens all the time. So Data's benefits outweigh his risk.


Knowing what and when to abstract can be hard to define precisely. Over-abstraction has a cost. So does under-abstraction. I have seen, written, and refactored terrible examples of both. Anecdotally, flattening an over-abstracted hierarchy feels like less work, and usually has better test coverage to validate correctness after refactoring, than abstracting under-abstracted code (spaghetti code, linear code, brand it how you will). Be aware of both extremes and try to find the balance.
