Accomplished-Lack721B

Accomplished-Lack721@alien.top
1 post • 15 comments

Only expose applications to the Internet if you have a good reason to. Otherwise, use a VPN to access your home network and get to your applications that way.

If you are exposing them to the internet, take precautions. Use a reverse proxy. Use 2FA if the app supports it. Always use good, long passwords. Log in as a limited user whenever possible, and disable admin users for services whenever possible. Consider an alternative solution for authentication, like Authentik. Consider using Fail2ban or CrowdSec to help mitigate the risks of brute-force attacks or attacks by known bad actors. Consider Cloudflare tunnels (there are pluses and minuses) to help mitigate the risk of DDoS attacks or to add other security enhancements that can sit in front of the service.

What might be a good reason for exposing an application to the Internet? Perhaps you want to make it available to multiple people who you don’t expect to all install VPN clients. Perhaps you want to use it from devices where you can’t install one yourself, like a work desktop. This is why my Nextcloud and Calibre Web installs, plus an instance of Immich I’m test-driving, are reachable online.

But if the application only needs to be accessed by you, with devices you control, use a VPN. There are a number of ways to do this. I run a WireGuard server directly on my router, and it only took a few clicks to enable and configure in tandem with the router company’s DDNS service. Tailscale also makes this very easy, with minimal setup. My NAS administration has no reason to be accessible over the internet. Neither does my Portainer instance. Or any device on my network I might want to SSH into. For all of that, I connect to the VPN first, and then connect to the service.
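For reference, a WireGuard client config (wg-quick format) looks roughly like this; the keys, addresses, and hostname below are placeholders, and a router’s built-in server will usually generate the real thing for you:

```ini
[Interface]
# The client's own key (generate with `wg genkey`)
PrivateKey = <client-private-key>
Address = 10.6.0.2/24
DNS = 192.168.1.1

[Peer]
# The WireGuard server running on the router
PublicKey = <server-public-key>
# DDNS hostname pointing at your home connection (placeholder)
Endpoint = myhome.example.com:51820
# Route only home-network traffic through the tunnel
AllowedIPs = 192.168.1.0/24, 10.6.0.0/24
PersistentKeepalive = 25
```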


The login page to your NAS.


I’ve been playing with Joplin. The one thing I can’t decide if I like: there’s no web interface. You have to use the app (versions are available for Mac, Windows, Linux and mobile devices).

There’s also Notes in Nextcloud.


I installed it because I was curious, and still learning some things about Docker.

I pretty quickly used it to install Portainer, and I’ve since managed everything from there.
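For anyone curious, that install boils down to roughly this compose file, assuming the standard Portainer CE image:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    ports:
      - "9443:9443"  # web UI over HTTPS
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Portainer manage the host's containers
      - portainer_data:/data                       # persists Portainer's own settings

volumes:
  portainer_data:
```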

The file manager is moderately handy, but nothing I couldn’t do either with the command line or another file-manager tool I’d install through Docker itself.

I still have it set up because I have no need to change it, but I wouldn’t use it if I were doing my setup from scratch.

I’m kind of curious about Cosmos as what seems like a more comprehensive alternative, but I’m pretty happy with how I already have some of the functions it bundles (like a reverse proxy) set up, so if I try it, it’ll probably just be to tinker.


Thanks for the heads-up on this project. It looks like it might work very well for people who basically want a web app as a view straight into a filesystem for dealing with folders.

Unfortunately, it doesn’t really meet the needs I’m laying out. The use case I’m describing is still one where the web app abstracts away the file system and uses albums. It just lays out a (smart, I think) way of recognizing and interpreting the organization in a pre-existing library, like one created from a Google Photos takeout, when bringing photos into its own system, accounting for duplicates in albums without doubling them up on disk.

Direct editing of EXIF is handy. Memories does that too, and it’s part of why it’s what I’m using. But my ideal setup would be one where the app initially writes metadata changes only to its own database, then (optionally) applies them to EXIF when exporting/downloading files, without touching the original files. It would also give the user an option to apply metadata to EXIF for the original files, but only after first prompting with warnings.

It seems your design goals are pretty different from any of that, which isn’t a criticism; I’m sure it works well for the way a lot of people like to work (just not me).


Syncthing would work well for just syncing.

So would Seafile, for a more Google Drive-like experience.

So would NextCloud, for more of an extensible Google Drive+Apps experience that you can customize to your needs.
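For Syncthing specifically, a minimal compose sketch is about all it takes (the host path is a placeholder):

```yaml
services:
  syncthing:
    image: syncthing/syncthing:latest
    restart: unless-stopped
    ports:
      - "8384:8384"        # web GUI -- keep this LAN/VPN-only
      - "22000:22000/tcp"  # sync protocol
      - "22000:22000/udp"
      - "21027:21027/udp"  # local discovery
    volumes:
      - /path/to/your/files:/var/syncthing
```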


It can handle almost any service you might care to self-host - and with that much RAM, several at a time. You could run multiple VMs and still have breathing room.

But a much less powerful box can also handle most self-hosted services well. If your existing Pi is doing the job, I wouldn’t switch. The 9900K will consume way more power, which is bad for the environment and your wallet.

Maybe make it into a testing station. Or donate it to a nonprofit. Or sell it. Or turn it into a living room gaming station, playing light games natively and streaming AAA games from another machine with Steam Link or Moonlight (in sleep mode when it’s not in use?). Or give it to a family member. Or make it available to a neighbor via Freecycle/Buy Nothing/similar gifting networks.


Only give the container access to the folders it needs for your application to operate as intended.

Only give the container access to the networks it needs for the application to run as intended.

Don’t run containers as root unless absolutely necessary.
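In compose-file terms, the three points above might look something like this (the service name, image, paths, and UID are all placeholders):

```yaml
services:
  myapp:                         # hypothetical service
    image: example/myapp:latest  # placeholder image
    user: "1000:1000"            # run as an unprivileged UID:GID instead of root
    volumes:
      - /srv/myapp/data:/data    # mount only the folders the app actually needs
    networks:
      - myapp_net                # a private network, shared only with containers that need it

networks:
  myapp_net:
    driver: bridge
```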

Don’t expose an application to the Internet unless necessary. If you’re the only one accessing it remotely, or if you manage the other devices that might (say, ones belonging to family members), access your home network via a VPN instead. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.

If you do need to expose an application to the Internet, don’t do so directly. Use a reverse proxy. One common setup: Put your containers on private networks (shared only where they need to speak to each other), with ports forwarded from the containers to the host. Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don’t forward anything else from the router. Register a domain, with a subdomain for each service you use. Point the domain and subdomains to your IP, or, using aliases, to a dynamic DNS hostname kept updated by a service on your network (in my case, my Asus router’s DDNS service). Have NPM map each subdomain to the appropriate port on the host (e.g., nc.example.com going to the port on the host used for Nextcloud). Have NPM handle SSL certificate requests and renewals.
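As a rough compose-file sketch of that layout (the NPM ports and volumes are its usual defaults; treat the Nextcloud service and its host port as a placeholder example):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # forwarded from the router
      - "443:443"  # forwarded from the router
      - "81:81"    # admin UI -- reach it over LAN/VPN only, never forward it
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt

  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"  # host port that NPM points nc.example.com at
    volumes:
      - ./nextcloud:/var/www/html
```

In NPM’s UI you’d then add a proxy host for nc.example.com pointing at the host’s port 8080, with a Let’s Encrypt certificate requested through the same screen.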

There are other options that don’t involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.

Consider using something like Fail2ban or CrowdSec to mitigate brute-force attacks and ban known bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDoS protection and other security enhancements.
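For Fail2ban, the usual starting point is a jail.local file; a minimal sketch might look like this (filter names and log paths depend on what you’re protecting):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 1h     # how long an offending IP stays banned
findtime = 10m    # window in which failures are counted
maxretry = 5      # failures allowed before a ban

[sshd]
enabled = true    # stock filter for SSH brute-force attempts

[nginx-http-auth]
enabled = true    # stock filter for failed HTTP auth through nginx
logpath = /var/log/nginx/error.log
```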

Keep good and frequent backups.

Don’t use the same password for multiple services, whether they’re ones you run or elsewhere.

Throw salt over your shoulder, say three Hail Marys and cross your fingers.


If you’re happy with those services … maybe you shouldn’t?

I self-host because I prefer to house my data locally when possible. It’s easier for backups, and I’m not subject to the whims and financial decisions of a company: whether its service will remain available, what it will cost, or what functions it will offer. The tradeoff is work on my part, but I enjoy tinkering and learning.

In my case, I self-host a Nextcloud instance for remote access to my docs, a Calibre Web server for eBooks (and to share those with a few trusted friends), and a Vaultwarden instance, because I’d prefer my vaults not be stored by a company whose servers are a major target for bad actors and whose TOS or offerings could change in the future.


Sorry, but this sounds a bit: “I’d like to eat this piece of cake, but also still have it available to me when I’m done.”

There are front ends that can make Docker apps easier to manage, like CasaOS. The tradeoff for ease of use is flexibility compared to something like Portainer or the CLI. CasaOS’s app library (for instance) frequently has out-of-date versions of apps, and if their default configuration doesn’t make sense for your purposes, you’re still going to have to delve deeper (whether in the CasaOS UI or another tool) to customize things to your needs.

That’s pretty much a given with any tool - if you don’t want to deal with how it works, then you need to accept the default configuration and cross your fingers that it works for your purposes.

And you’re still not going to get away from the fundamentals of how Docker works, if you find them troublesome for some reason. Updating a Docker app with something like CasaOS does the same thing it would with Portainer or the command line. I’m not quite sure what seems “wrong” about it to you, but it would be “wrong” in the same way no matter what front end you use.
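Whatever front end you use, an update for a compose-managed app ultimately boils down to something like:

```sh
docker compose pull   # fetch newer images for the services in the compose file
docker compose up -d  # recreate only the containers whose image changed
```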
