I work in tech and am constantly finding solutions to problems, often on other people’s tech blogs, that I think “I should write that down somewhere” and, well, I want to actually start doing that, but I don’t want to pay someone else to host it.
I have a Synology NAS, a sweet domain name, and familiarity with both Docker and Cloudflare tunnels. Would I be opening myself up to a world of hurt if I hosted a publicly available website on my NAS using [insert simple blogging platform], in a Docker container and behind some sort of Cloudflare protection?
In theory that’s enough levels of protection and isolation but I don’t know enough about it to not be paranoid about everything getting popped and providing access to the wider NAS as a whole.
Update: Thanks for the replies, everyone, they’ve been really helpful and somewhat reassuring. I think I’m going to have a look at GitHub Pages and Cloudflare Pages as my first port of call for my needs.
I’ll let folks with more security experience dive into your specific question, but another option is to host your website on something like GitHub Pages (using a static site generator like Jekyll) and point Cloudflare at it. That way you don’t need anything pointed at your local network, you get the uptime of GitHub, and you still benefit from your own domain name.
That’s what I’m doing with my own blog and it’s been great. GitHub provides the service for free, but if they ever charge for it I’ll just start hosting it locally.
Or take GitHub out of the equation and directly use Cloudflare Pages. It has its own pros and cons, but for a simple static blog it’ll be more than enough, and it takes out the CNAME hassle.
Speaking of Cloudflare, if you’re okay with not self hosting, then there’s Cloudflare Pages which is good for hosting static websites.
I know it’s not technically “self” hosted, but I’d get a cheap yearly VPS somewhere and run a webserver off of that. For me it’s worth the peace of mind to keep my network a temple instead of a bus terminal. I paid $13 USD for the year for mine.
I believe Oracle is still offering to slice off a bit of compute for free that should accomplish OP’s goal. I’ve used it to test a Jellyfin host among other things and for the price it can’t be beat!
A VPS makes sense insofar as keeping things thoroughly isolated from my own systems, but the overhead of maintaining a box that’s directly connected to the Internet like that isn’t something I’m keen on and I’m not convinced I’d have the expertise to do it right from the outset.
Change the SSH port to something with 4-5 digits, disable SSH password auth and use certificates only, and don’t expose any port other than SSH and 443.
If you’re paranoid, use Cloudflare as a proxy and set the VPS firewall to only accept incoming traffic from Cloudflare’s IP list.
That’s about it really.
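If it helps picture it, a rough sketch of that on a typical Debian/Ubuntu VPS might look like the following (the port number is just an example, and ufw is only one way to do the firewall side):

```
# /etc/ssh/sshd_config (fragment) — example high port, key-only auth
Port 22222
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

Then restart sshd and, for the Cloudflare-only idea, allow 443 solely from Cloudflare’s published ranges:

```
sudo systemctl restart sshd   # the unit may be called "ssh" on some distros

# https://www.cloudflare.com/ips/ lists Cloudflare's current ranges
for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
  sudo ufw allow proto tcp from "$ip" to any port 443
done
```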
Changing the port is security by obscurity, and it doesn’t take much time for botnets to scan the entire IPv4 space on all ports. See for example the ever-updated list that’s available on Shodan.
Disable password login and use certificates as you’ve suggested already, add fail2ban to block random drive-bys, and you’re off to the races.
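A minimal fail2ban setup for those drive-bys is only a few lines; something like:

```
# /etc/fail2ban/jail.local (fragment) — ban IPs after repeated failed SSH logins
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Install it with your package manager (e.g. `sudo apt install fail2ban`), drop that in, and restart the fail2ban service.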
The Oracle Cloud VPS only has SSH key authentication enabled by default. You can also set it to only allow SSH from your home IP in the virtual firewall before the machine is ever spun up.
Their current free ARM offering is 1 machine with 4 cores and 24 GB RAM for life. You can also add another 2 AMD machines with 1 core and 1 GB RAM and still be in their free tier.
If you’re going to set it up and take advantage of the ARM machine, make sure you pick a home location for your account that has multiple availability zones. San Fran right now only has 1 zone, so if the shared ARM instances are all used up, you’ll have to wait a few days and try again. Phoenix I think has 3, so you can try with another zone right away.
The first worry is attack vectors around the Synology itself, its firmware, and its network stack. Those devices are very closely scrutinized. Historically there have been many different vulnerabilities found and patched. Something like the Log4j vulnerabilities back in the day, where something just has to hit the logging system to hit you, might open a hole in any of the other standard software packages there. And because the platform is so well known, once one vulnerability is found, attackers already know what else exists by default and have plans for ways to attack it.
Vulnerabilities that COULD affect you in this case are few and far between, but few and far between is how these things happen.
The next concern you’re going to have is someone slipping you a mickey in a container image. By and large it’s a bunch of good people maintaining the container images, and they’re including packages from other good people. But this also means that there are a hell of a lot of cooks in the kitchen, in the distribution, and upstream.
To be perfectly honest, with everything on auto-update, Cloudflare’s built-in protections for DDoS and attacks, and the nature of what you’re trying to host, you’re probably safe enough. There’s no three-letter government agency or elite hacker group specifically after you. You’re far more likely to accidentally trip over a zero-day email image filter / PDF vulnerability and get botnetted than you are to have someone successfully attack your Argo tunnel.
That said, it’s always better to host in someone else’s backyard than your own. If I were really, really stuck on hosting in my house on my network, I’d probably stand up a dedicated box, maybe something as small as a Pi Zero. I’d make sure that I had a really decent router/firewall and slip that hosting device into an isolated network that’s not allowed to reach out to anything else on my network.
Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated. No port forwards (you already have tunnels for that), don’t use it for DNS, don’t use it for DHCP, and don’t allow your network’s users or devices to see ARP traffic from it.
The firewall drops everything between your home network and that box except SSH in, or maybe VNC in, depending on your level of comfort.
Are you my brain? This is exactly the sort of thing I think about when I say I’m paranoid about self-hosting! Alas, as much as I’d like to be able to add an extra box just for that level of isolation, it’d probably take more of a time commitment than I have available to get it properly set up.
The attraction of docker containers, of course, is that they’re largely ready to go with sensible default settings out of the box, and maintenance is taken care of by somebody else.
Can I ask you to elaborate on this part?
Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated. No port forwards (you already have tunnels for that), don’t use it for DNS, don’t use it for DHCP, and don’t allow your network’s users or devices to see ARP traffic from it.
I used to have a separate box, but the only thing it did was port forwarding
Specifically, I don’t really understand the topology of this setup, or how to set it up.
Cloudflare Tunnel is a thin client that runs on your machine and keeps an outbound connection open to Cloudflare; when there’s a request from outside to Cloudflare, it relays it via the established tunnel to the machine. As such, your machine only needs outbound internet access (to Cloudflare servers) and no inbound access (i.e. no port forwarding).
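If it helps to see it concretely, the cloudflared side is just a small YAML file; a rough sketch (tunnel name, paths, and hostnames are placeholders) looks like:

```
# ~/.cloudflared/config.yml — names and paths are placeholders
tunnel: <tunnel UUID or name from `cloudflared tunnel create`>
credentials-file: /home/user/.cloudflared/<tunnel UUID>.json
ingress:
  - hostname: blog.example.com
    service: http://localhost:8080   # wherever the blog container listens locally
  - service: http_status:404         # catch-all for anything that doesn't match
```

You then run `cloudflared tunnel run <name>` on the host (or in its own container), and nothing on your side ever accepts an inbound connection.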
Thank you for your reply, but I was actually asking about the network stuff 😅
I used Cloudflare tunnels for many years; now I’m a bit too tinfoil-hatted to use any Cloudflare 😅
Cloudflare tunnels are layer 7, so it’s not unlimited access by any means. This also means that certain things will break, btw; for example, if your website uses WebSockets to load information, that isn’t supported.
Next, I’d put the computer that is going to be hosting into an isolated vlan of its own and access via external URL only.
If you’re going to use docker images, make sure to vet that they’re updated often and always spin up the latest.
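For what it’s worth, a minimal compose sketch of that idea might look like the following (the blog image is a placeholder for whatever you’ve actually vetted, and Watchtower is just one common way to keep images current):

```
# docker-compose.yml — rough sketch, placeholder image names
services:
  blog:
    image: some-blog-platform:latest       # placeholder — use an image you've vetted
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:80"                # bind locally only; the tunnel/reverse proxy does the exposing
  watchtower:
    image: containrrr/watchtower:latest    # polls for updated images and restarts containers
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```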
CF tunnels are layer 3, not 7, and they have support for WebSockets. It’s basically a WireGuard VPN with a few extras built on top.
https://developers.cloudflare.com/cloudflare-one/faq/cloudflare-tunnels-faq/
That document doesn’t say what layer. But it does say it supports Websockets.
Just odd that when I try to set it up using a named tunnel I don’t get an option to specify the WS service type. However it does require a service type if you want to connect to it.
Looking at this page, it would seem that it’s layer 7. I could be wrong, though; my front-end app has issues finding my backend service for WebSockets.
Granted I even tried to connect to my private computer using other protocols. I couldn’t get through. Anyway I’m most likely going to be taking that project offline soon.
No, but I thought I clarified that when I said it’s basically a WireGuard VPN, which operates using TCP/UDP (layer 3). Layer 7 is stuff like HTTPS; CF tunnels are lower level.
The page you linked is missing the layer between CF and the source server, so it doesn’t indicate the layer. You can look up the WireGuard protocol if you want more details.
You’ll be fine enough as long as you enable MFA on your NAS, and ideally configure it so that anything “fun”, like administrative controls or remote access, is only available on the local network.
Synology has sensible defaults for security, for the most part. Make sure you have automated updates enabled, even for minor updates, and ensure it’s configured to block multiple failed login attempts.
You’re probably not going to get hackerman poking at your stuff, but you will get bots trying to SSH in and log in to the WordPress admin console, even if you’re not using WordPress.
A good rule of thumb for securing computers is to minimize access/privilege/connectivity.
Lock everything down as far as you can, turn off everything that makes it possible to access it, and enable every tool for keeping people out or dissuading attackers.
Now you can enable port 443 on your NAS to be publicly available, and only that port, because you don’t need anything else.
You can enable your router to forward only port 443 to your NAS.
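On a generic Linux box the same idea is a few ufw lines (Synology’s own firewall does the equivalent through the DSM Control Panel, so treat this purely as an illustration of the principle):

```
sudo ufw default deny incoming    # drop everything inbound by default
sudo ufw default allow outgoing
sudo ufw allow 443/tcp            # the one service you actually mean to expose
sudo ufw enable
```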
It feels silly to say, but sometimes people think “my firewall is getting in the way, I’ll turn it off”, or “this one user needs read access to one file, so I’ll give read/write/execute privileges to every user in the system to this folder and every subfolder”.
So as long as you’re basically sensible and use the tools available, you should be fine.
You’ll still poop a little the first time you see that 800 bots tried to break in. Just remember that they’re doing that now, there’s just nothing listening to write down that they tried.
However, the person who suggested putting cloudflare in front of GitHub pages and using something like Hugo is a great example of “opening as few holes as possible”, and “using the tools available”.
It’s what I do for my static sites, like my recipes and stuff.
You can get a GitHub action configured that’ll compile the site and deploy it whenever a commit happens, which is nice.
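As a rough sketch, assuming a Hugo site and the widely used peaceiris community actions (branch names and versions here are illustrative), the workflow can be as small as:

```
# .github/workflows/deploy.yml — example only
name: Deploy blog
on:
  push:
    branches: [main]
permissions:
  contents: write
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true          # pull in the theme if it's a git submodule
      - uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
      - run: hugo --minify          # writes the static site to ./public
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```

Pushing to main then rebuilds and republishes the site automatically; if you go with Jekyll instead, GitHub Pages has an equivalent build pipeline built in.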