I’m interested in possibly hosting my own Lemmy instance - just for my own account. I was thinking of hosting it on a Raspberry Pi (possibly the 1 GB Pi 4 B), but I couldn’t find much definitive information on the hardware requirements for such an instance, so I don’t know if this is even possible. How much storage is required? Is the Pi 4’s CPU powerful enough? How much memory?
I’m not sure if it’s still valid, but Oracle Cloud Infrastructure (OCI) had a free tier with 4 vCPUs, 24 GB of RAM, and 200 GB of storage. No costs, ever! You could sign up there and set up an even bigger instance.
Ever? That sounds too good to be true. Especially with that much RAM.
-edit-
Oh wow, I see their Always Free tier and it’s true. Impressive!
Their free tier is prone to being shut down without warning, though.
It kinda is too good to be true.
I’ve got a Vultr VPS with 1 GB of RAM running just Lemmy and a reverse proxy. RAM usage sits just below 50% most of the time. I never checked when I first spun it up, so I have no idea how quickly usage is growing, or whether it’s growing at all.
If you, as the sole user, aren’t subscribing to dozens (if not hundreds) of communities, 1 GB should be just barely enough. As others have pointed out, it is storage that requires more attention on a Pi 4 B.
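On a 1 GB machine, memory is tight enough that it can help to cap each container so one of them can’t starve the others. A minimal sketch, assuming the usual Docker Compose deployment from the Lemmy docs - the service names and images follow that layout, and the limits are just guesses at how to divide 1 GB, not tested values:

```yaml
# Hypothetical per-container memory caps for a 1 GB Pi.
# Adjust service names/images to match your actual compose file.
services:
  lemmy:
    image: dessalines/lemmy:latest
    mem_limit: 256m
  lemmy-ui:
    image: dessalines/lemmy-ui:latest
    mem_limit: 128m
  postgres:
    image: postgres:15-alpine
    mem_limit: 256m
    shm_size: 64m   # Postgres needs some shared memory headroom
  pictrs:
    image: asonix/pictrs:latest
    mem_limit: 128m
```

Whatever limits you pick, leave a couple hundred MB free for the OS and the reverse proxy, and watch `docker stats` for a few days before locking anything in.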
> it is storage that requires more attention
Please correct me if I am wrong, but this feels like a flaw in how Lemmy (and perhaps other fediverse apps as well, I’m not sure) is designed. Why do I need to store all posts made to a community that one of the users on my instance subscribes to? Would it not be better to store only my users’ posts and comments, plus the posts made to any communities hosted on my instance? Why do I need to store content from other instances and users?
It’s caching posts from other servers so that if you have an instance with a few hundred or a few thousand people on it and they all open the home page, you don’t send out thousands of requests per post and end up DDoSing a bunch of other servers.
I don’t really understand this reasoning. Some server would still need to receive those requests at some point. Would it not be better if those requests were distributed, rather than hammered onto one server? If you have a server caching all the content for its users, then all of its users are sending all of their content requests to that one single server. If users fetched content from the source servers instead, the load would be distributed. The only real difference I can think of is the speed of post retrieval, and even that could cut the other way: perhaps the source server is faster than one’s own host server.
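The asymmetry is that in the push/cache model the origin server delivers each post to a subscribing instance once, no matter how many local users later read it, while in the pull model every user view is a separate hit on the origin. A back-of-envelope comparison, with entirely hypothetical numbers:

```python
# Rough comparison of the two federation strategies discussed above.
# All numbers are made up for illustration.

users_on_instance = 1000   # users of one receiving instance
remote_posts_viewed = 50   # remote posts each user opens per day

# Pull model: every user's view triggers a request to the origin server.
pull_requests_to_origin = users_on_instance * remote_posts_viewed

# Push/cache model: the origin delivers each post to the subscribing
# instance once; all local users then read the cached copy.
push_deliveries_to_instance = remote_posts_viewed

print(pull_requests_to_origin)      # 50000 requests hitting the origin
print(push_deliveries_to_instance)  # 50 deliveries to this instance
```

So for a single-user instance the cache buys you almost nothing, but for a big instance it collapses the load on the origin by a factor of the local user count - which is why the protocol is designed this way even though it costs small instances extra storage.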
I was thinking of doing the same, just in an LXC container on Proxmox. Based on a few comments here, your Pi should be enough. I’d give it a try.
I think you will probably need more power than that. Would it be possible for you to host it on your desktop PC in a VM?