Are there any Discord servers or Matrix rooms to chat about hosting a Lemmy instance? I’ve got Lemmy running, but I think there are several of us in the same boat struggling with federation performance issues, and it might be good to have some place to chat in real time.
I think it is less about pointing fingers at who’s to blame, and more about seeing if there are things we can do to resolve/alleviate it.
I recall reading somewhere that @Ruud@lemmy.world mentioned before that the server is already scaled all the way up to a fairly beefy dedicated server, so perhaps it is soon time to scale this service horizontally across multiple servers. If nothing else, I think a lot of value could be gained by moving the database to a separate server from the UI/backend server as a first step, which shouldn’t take too much effort (other than recurring $ and a bit of admin time) even with the current Lemmy code base/deployment workflow…
Well, I do know most of the components do scale.
The UI/Frontend, for example, you can run multiple instances easily.
The API/MiddleTier, I don’t know if it supports horizontal scaling though. But, a beefy server can push a TON of traffic.
The database/backend, being postgres, does support some horizontal scaling.
Regarding the app itself, it scales much better if EVERYONE didn’t just flock to lemmy.ml, lemmy.world, and beehaw.org. I think that is one of the huge issues… everyone wanted to join the “big” instance.
If you look here: https://lemmy.world/comment/65982
At least specs and capacity wise, it doesn’t suggest it is hitting a wall.
The more I dug into things, the more I think the limitation comes from an age-old issue: if your service is expected to connect to a lot of flaky destinations, you’re not going to have a good time. The big instance’s backend is trying to send federation event messages, but a bunch of smaller federated destinations have shuttered (their users weren’t getting all the messages, so they just went and signed up on the big instances to see everything). The big instances’ outgoing connections then have to wait for a timeout to discover each recipient is no longer available, which results in a backed-up queue of messages to send out.
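To make the effect concrete, here’s a back-of-the-envelope sketch. All the numbers (peer counts, send time, timeout) are made-up assumptions, not measurements from lemmy.world — the point is just how badly a handful of dead peers dominates a serial delivery pass:

```python
# Hypothetical sketch: dead federation peers backing up a serial delivery queue.
# Every number here is an assumption for illustration, not a measured value.

HEALTHY_PEERS = 900   # peers that respond quickly (assumed)
DEAD_PEERS = 100      # shuttered instances that only ever time out (assumed)
SEND_TIME = 0.05      # seconds per successful delivery (assumed)
TIMEOUT = 10.0        # seconds wasted waiting on each dead peer (assumed)

def serial_delivery_time(healthy, dead, send_time, timeout):
    """Time to push ONE activity to every peer, one connection at a time."""
    return healthy * send_time + dead * timeout

total = serial_delivery_time(HEALTHY_PEERS, DEAD_PEERS, SEND_TIME, TIMEOUT)
print(f"one activity, serial: {total:.0f}s")  # prints "one activity, serial: 1045s"
```

With those assumed numbers, 10% dead peers account for ~96% of the delivery time for a single activity, which is why the queue backs up even though the hardware is idle.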
When I posted a reply to myself on lemmy.world, it took 17 seconds to reach my instance (hosted in a data centre with sub-200ms ping to lemmy.world itself, so this is not a network latency issue), which exceeds the 10-second limit defined by Lemmy. Increasing that limit at the application/protocol level won’t help, because as more small instances come up, they too will want to subscribe to the big hubs, which will just further exacerbate the lag.
I think the current implementation is fairly naive: it can scale a bit, but will likely be insufficient as the fediverse grows, rather than as an individual instance’s user count grows. That is, the bottleneck will not so much be “this can support an instance with up to 100K users” but rather “now that there are 100K users, we also have 50K servers trying to federate with us”. To work around that, you’re going to need a lot more than Postgres horizontal scaling… you’d need message buses and workers that can ensure jobs (i.e.: outward federation) are delivered effectively.
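The shape of that worker/queue idea could look something like the sketch below: one outbound FIFO per destination host, drained by independent workers, so a dead peer’s timeouts only stall its own queue. This is an illustrative toy, not Lemmy’s actual architecture — the names (`FederationOutbox`, `deliver`) are invented for the example:

```python
# Hypothetical sketch of per-destination outbound queues with independent
# workers, so one dead peer can't stall deliveries to healthy peers.
# Class and function names are illustrative, not Lemmy's actual API.
from collections import defaultdict, deque

class FederationOutbox:
    def __init__(self):
        # One FIFO queue per destination host.
        self.queues = defaultdict(deque)

    def enqueue(self, activity, destinations):
        """Fan one activity (upvote, comment, ...) out to every subscriber's queue."""
        for host in destinations:
            self.queues[host].append(activity)

    def drain(self, host, deliver):
        """A worker drains ONE host's queue; a slow or dead host only blocks itself."""
        sent = []
        while self.queues[host]:
            sent.append(deliver(host, self.queues[host].popleft()))
        return sent
```

In a real deployment each `drain` call would run on its own worker (with retries and backoff for dead hosts), which is essentially the message-bus-plus-workers pattern described above.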
I agree here. I don’t see Federation scaling without major arch changes. I can’t see a server making 50k (subscribed servers) outbound connections for every upvote, comment, etc.
Q: How many Federated actions, on average per user per community per day? Probably a low number, say 5. But 5 * Users * Servers is a huge number of connections once Users and Servers get moderately large. 500k users and 5k servers is 12.5 billion connections, just for one community.
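Checking that estimate with the stated assumptions (5 actions per user per day, 500K users, 5K subscribed servers):

```python
# Back-of-the-envelope check of the 12.5 billion figure above.
# Inputs are the assumed numbers from the post, not real measurements.
actions_per_user_per_day = 5
users = 500_000
servers = 5_000

connections = actions_per_user_per_day * users * servers
print(f"{connections:,}")  # prints "12,500,000,000" — 12.5 billion, as estimated
```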
That is a VERY small server…
MY server, has 32 cores, 64 threads, 256G of ram, and 130T of storage (4T of which is NVMe)
Sheesh, that is prob why that instance is dragging!!
https://lemmy.world/post/56228
They’ve bumped the server much more than the originally posted VM. I was pointing to the Zabbix charts and actual usage. Notice CPU is sub-20%, and network usage is sub-200Mbit. There’s plenty of headroom.
I found the newest link- https://lemmy.world/comment/379405
Ok, that is a pretty sizable chunk of hardware.
I care less about what it is running on than about what is actually consumed. At sub-20% usage, it really doesn’t matter what the hardware is, because the overall spec is not the bottleneck.
Your original link is from 9 days ago, before the massive surge hit.
https://lemmy.world/post/56228 Came 8 days ago, with reports of it being pretty well saturated.
Remember- the big surge, is in the last 3-4 days.
Fediverse stats: https://fediverse.observer/dailystats
In the last 4 days, they have gone up over 400% in size.
I don’t know if you’re totally missing it… here is the CPU usage from 3 hours ago: https://lemmy.world/comment/377946
Even if you 4x the usage from the alleged 400% growth, the spec of the server itself is not the bottleneck. They’ve also significantly increased the federation workers to 10000 based on my private chat… so something is not scaling to the fullest potential.
I think the focus should be more on why it is not using all the resources available, rather than “that server is weak”. We’re about to see a much larger influx come July 1st that’s going to make the 400% growth look like a joke, and if the current larger instances aren’t able to handle federation now, the current smaller instances will buckle hard come the big move.