At the time of writing, Lemmy.world has the second-highest number of active users of any Lemmy instance.
Also at the time of writing, Lemmy.world has >99% uptime.
By comparison, other Lemmy instances with similar user counts keep going down.
What optimizations has Lemmy.world made to its hosting configuration that have made it more resilient than other instances’ hosting configurations?
See also “Does Lemmy cache the frontpage by default (read-only)?” on !lemmy_support@lemmy.ml
Yeah, that’s exactly why I’m asking this question. All the effort seems to be going into the DB – but you can have a horribly shitty DB and backend and still have a massively performant webserver just by caching the reads in RAM.
I didn’t see any tickets about this on GitHub, which is why I’m asking around to see if there’s actually some very low-hanging fruit for improving all instances with a frontend RAM cache.
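To make the idea concrete, here’s a toy sketch of what “caching the reads in RAM” means: repeated identical read-only requests (e.g. the anonymous frontpage) get answered from memory instead of hitting the Lemmy backend and database every time. In a real deployment this would live in the web-proxy (nginx, Varnish, etc.) rather than in application code; the endpoint path and TTL below are illustrative assumptions, not how any instance actually does it.

```python
import time
import requests

CACHE_TTL = 60   # seconds to serve the cached copy before refetching (illustrative)
_cache = {}      # url -> (fetched_at, response_body), held entirely in RAM

def cached_get(url: str) -> bytes:
    """Return the cached body if it is still fresh, otherwise fetch and cache it."""
    now = time.time()
    hit = _cache.get(url)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]                      # cache hit: no backend or DB work at all
    body = requests.get(url, timeout=10).content
    _cache[url] = (now, body)
    return body

# Every anonymous frontpage request within the TTL window is answered from RAM.
frontpage = cached_get("https://lemmy.world/api/v3/post/list?sort=Hot&limit=20")
```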
Much of your post seemed to focus on the techniques employed by lemmy.world; caching websocket responses in the web-proxy does not seem to feature prominently among those techniques.
If you’re interested in advancing the state of the discussion around web-proxy caching, I’d consider standing up an instance to experiment with it and reporting your own findings. You wouldn’t necessarily have to take on the ongoing expense and moderation headache of a public instance: you could set up with new user registrations closed, create your own test users, and write a small load generator powered by https://join-lemmy.org/api/ to investigate the effect of caching common API queries.
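For what it’s worth, here’s a rough sketch of the kind of load generator being suggested: issue a batch of common read-only API queries against a closed test instance and record latencies, once with the stock setup and once with a caching proxy in front. The endpoint paths follow the Lemmy HTTP API documented at https://join-lemmy.org/api/, but the exact paths, sort values, and the instance URL here are assumptions you’d verify against your own deployment.

```python
import statistics
import time
import requests

INSTANCE = "https://lemmy.example.test"   # hypothetical closed-registration test instance
QUERIES = [                               # common read-only queries worth caching
    "/api/v3/post/list?sort=Hot&limit=20",
    "/api/v3/post/list?sort=New&limit=20",
    "/api/v3/community/list?sort=TopAll&limit=20",
]

def run(requests_per_query: int = 50) -> None:
    """Issue repeated GETs for each query and report median / p95 latency."""
    for path in QUERIES:
        url = INSTANCE + path
        timings = []
        for _ in range(requests_per_query):
            start = time.perf_counter()
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            timings.append(time.perf_counter() - start)
        p95 = statistics.quantiles(timings, n=20)[-1]
        print(f"{path}: median {statistics.median(timings) * 1000:.1f} ms, "
              f"p95 {p95 * 1000:.1f} ms")

if __name__ == "__main__":
    run()
```

Comparing the two sets of numbers would show directly how much headroom a frontend cache buys for these read-heavy queries.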