15 votes

Why every toot is also a potential denial of service attack

10 comments

  1. Rudism
    Link

    A few years ago (before Docker was on my radar) I tried to set up a self-hosted Mastodon instance for myself. The number of dependencies, components, moving parts, resources it required, and sheer complexity of its architecture completely put me off it. Not to say that it's bad or unnecessary--just that it was a lot more than I was expecting, and not the fun little weekend project I had hoped for.

    I know that using email as an example of a decentralized service is largely scoffed at, and not without good reason, but at the same time its relative simplicity is very appealing to me. So much so that a subsequent weekend project of mine was to whip up a proof-of-concept of what I imagined an email-inspired microblog-ish service might look like. It doesn't really scratch the same itch that Twitter and Mastodon do, but I don't think I ever had that itch to begin with.

    7 votes
  2. [5]
    mtset
    Link

    This is an interesting post, but I'd argue that this is more down to the design of Mastodon in particular than the Fediverse in general. In particular - and I'm working on a blog post about it right now - the default configuration of Sidekiq, the default lack of any front-side caching, and the default configuration of the Dockerized PostgreSQL that Mastodon ships are just not well set up. Things like GoToSocial and Lemmy, which use more efficient architectures in more efficient languages, handle these load surges far better, and a caching server (even just a well-configured nginx!) or CDN helps a huge amount as well.
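
    To make the caching point concrete: anonymous fetches of a public post or profile are identical for every requester, so even a short-lived front-side cache collapses a federation surge into a single backend hit. A toy sketch of the idea in Python (illustrative only, not Mastodon's actual code):

      import time

      class TTLCache:
          """Collapse repeated fetches of the same public resource into one backend hit."""

          def __init__(self, ttl_seconds=60):
              self.ttl = ttl_seconds
              self.store = {}  # url -> (expiry, body)

          def get(self, url, fetch):
              entry = self.store.get(url)
              if entry and entry[0] > time.monotonic():
                  return entry[1]  # served from cache: zero backend work
              body = fetch(url)    # only the first request in a surge hits the backend
              self.store[url] = (time.monotonic() + self.ttl, body)
              return body

    With a 60-second TTL, a burst of 3,000 identical fetches costs one backend render instead of 3,000; this is essentially what a well-configured nginx proxy cache or CDN does at the HTTP layer.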

    Also, Mastodon doesn't use pull-based federation, even though it's supported in the spec. That would help a lot.
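
    For context on what pull-based federation could look like: every ActivityPub actor's outbox is a fetchable collection, so a follower's server could periodically pull new activities instead of the author's server pushing to every follower. A rough sketch in Python (the instance and actor are made up, and real servers may additionally require signed requests):

      import requests

      # Hypothetical actor; Mastodon exposes outboxes at URLs shaped like this.
      OUTBOX = "https://example.social/users/alice/outbox"
      HEADERS = {"Accept": "application/activity+json"}

      def pull_recent_activities(outbox_url):
          """Fetch the first page of an actor's outbox instead of receiving pushes."""
          collection = requests.get(outbox_url, headers=HEADERS, timeout=10).json()
          page = collection.get("first", collection)  # OrderedCollection points at its first page
          if isinstance(page, str):                   # "first" may be a URL or an embedded page
              page = requests.get(page, headers=HEADERS, timeout=10).json()
          return page.get("orderedItems", [])

      for activity in pull_recent_activities(OUTBOX):
          print(activity.get("type"), activity.get("id"))

    The origin still answers one request per polling server, but those responses are identical and cacheable, unlike per-follower pushes.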

    5 votes
    1. [2]
      cmccabe
      Link Parent

      Assuming you mean a blog post elsewhere and not a post on tildes.net, would you mind sharing the URL? I'm sure a lot of us would be interested in reading this and other posts from you.

      3 votes
    2. [2]
      FlippantGod
      Link Parent

      I second this; the costs listed in this article are eyebrow-raisingly high. However, I can't deny that the overall architecture is complex, even outside of Mastodon, particularly for an ecosystem with the spirit of enabling individuals to platform themselves.

      RSS, for instance, is a markedly less (globally) efficient system than push notifications, yet it is extremely simple and robust.

      1 vote
      1. mtset
        Link Parent

        RSS, for instance, is a markedly less (globally) efficient system

        It's only a little less efficient, especially if both the client and server support etag correctly!
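
        To spell that out: a feed reader sends back the ETag it saw last time, and the server answers 304 Not Modified with no body when nothing has changed, so most polls cost only a round trip of headers. A minimal sketch using Python's requests library (the feed URL is a placeholder):

          import requests

          FEED_URL = "https://example.com/feed.xml"  # placeholder

          def fetch_feed(url, etag=None):
              """Conditional GET: only download the feed body if it changed."""
              headers = {"If-None-Match": etag} if etag else {}
              resp = requests.get(url, headers=headers, timeout=10)
              if resp.status_code == 304:
                  return None, etag  # unchanged: the server sent headers only
              resp.raise_for_status()
              return resp.text, resp.headers.get("ETag")

          # The first poll downloads the feed; later polls are nearly free
          # whenever the feed hasn't changed.
          body, etag = fetch_feed(FEED_URL)
          body, etag = fetch_feed(FEED_URL, etag)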

        2 votes
  3. [2]
    vord
    Link

    I think a key problem with ActivityPub is that it isn't a decentralized protocol. The architecture is still a fairly centralized hub and spoke system, so the best scaling will come with relatively few very-large hub websites.

    A true decentralized social network is going to need to mimic various p2p systems. You're never going to avoid the 1 -> 1,000,000 broadcast problem unless you can enlist some of those 1,000,000 to rebroadcast.
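
    The arithmetic behind rebroadcasting is what makes it attractive: if each recipient forwards to a handful of peers, the origin's work stays constant and the broadcast completes in logarithmically many hops. A quick, idealized illustration in Python (ignoring churn and duplicate suppression):

      import math

      followers = 1_000_000
      fanout = 10  # each node rebroadcasts to 10 peers

      # Push-only: the origin sends every copy itself.
      print(f"push: origin sends {followers:,} messages")

      # Rebroadcast tree: the origin sends `fanout` copies; each hop multiplies reach.
      hops = math.ceil(math.log(followers, fanout))
      print(f"p2p:  origin sends {fanout}, finishing in ~{hops} hops")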

    3 votes
    1. skybrian
      Link Parent

      Unless there is a true broadcast network protocol, distributing an update to a million users is going to require a million network messages no matter what you do. It seems like updates could be delayed and consolidated though, because those people aren't necessarily online when the update happens.
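
      One way to read "consolidated": instead of queueing one delivery per event, a server could buffer events per destination and flush each buffer as a single batch on a timer. A toy sketch of that batching idea in Python (hypothetical; not how Mastodon's delivery queues actually work):

        from collections import defaultdict

        class BatchedOutbox:
            """Buffer activities per destination server; deliver each batch as one send."""

            def __init__(self, send):
                self.send = send                  # send(server, [activities])
                self.pending = defaultdict(list)  # server -> queued activities

            def enqueue(self, server, activity):
                self.pending[server].append(activity)

            def flush(self):
                # One delivery per destination, however many events accumulated.
                for server, activities in self.pending.items():
                    self.send(server, activities)
                self.pending.clear()

        outbox = BatchedOutbox(send=lambda s, acts: print(s, "<-", len(acts)))
        for reply in range(100):          # 100 replies fanning out to 3 servers
            for i in range(3):
                outbox.enqueue(f"server{i}.example", {"type": "Create", "id": reply})
        outbox.flush()                    # 3 sends instead of 300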

      Improved software and protocols should help, but it would also make sense for someone who is essentially running a high-traffic website to need to pay for infrastructure to handle the load. Mastodon and Twitter are similar in that signing up is free (or appears to be free) no matter how many subscribers you end up having; there are no limits.

      I've seen people tooting that it would be very rude for a celebrity to sign up on a shared Mastodon instance. That seems like a rather minimal way of communicating expectations, and it probably won't work. There needs to be more communication before and during signup for people who are new to it and won't see whatever toots people are making.

      The "true distributed" dream doesn't make costs go away. It distributes costs to recipients. The cost of a torrent is borne by the various ISPs involved, and you pay it as part of your Internet bill.

      I'm not sure that's the best way? Sending a message to a million people is valuable, but people don't act like it is. It would make sense for the sender to bear some of the costs, to cut down on spam.

      But this would also be true of replies that go to the same large audience. There needs to be some pushback, so that people think before replying. Or maybe it would make sense for reply-all to be turned off for large accounts, similar to how email to large mailing lists should always be no-reply or go back only to the sender.

      On the other hand it seems like who should bear the cost of a message depends on what it is. If you're an advertiser then you should bear the costs of interrupting people. If you're the talent then the people getting the message should pay you (like we pay for books and movies). It seems like part of the controversy with charging blue-checks on Twitter is that it's unclear if they are the advertisers or the talent. Some of both, right? Value is subjective. Prices have to be negotiable, somehow.

      2 votes
  4. Greg
    (edited )
    Link

    The first thing that struck me here is that the costs seem surprisingly high compared to the load involved. I know that's not the core point of the article, and it's quite feasible for users to interact with many orders of magnitude more followers than the author, but it surprised me that 300k requests was considered anything out of the ordinary on a $20/month plan. That's burst usage, not sustained, and I'm having trouble seeing how it could cost more than a few cents in server time; even if he were interacting like this on a daily basis, that leaves plenty of space for staff overhead and profit in that $20.
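
    Back-of-envelope, under loose assumptions (say 50 ms of CPU per delivery job and roughly $0.04 per vCPU-hour, in the range of common cloud pricing), that burst comes out to a few tens of cents at most:

      jobs = 300_000
      cpu_seconds_per_job = 0.05  # assumption: 50 ms of CPU per job
      vcpu_hour_price = 0.04      # assumption: ~$0.04 per vCPU-hour

      cpu_hours = jobs * cpu_seconds_per_job / 3600
      print(f"{cpu_hours:.2f} vCPU-hours -> ${cpu_hours * vcpu_hour_price:.3f}")
      # 4.17 vCPU-hours -> $0.167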

    Keeping the costs down doesn't fix the architectural question, but it does mean that I'd expect most people to end up paying single-digit dollars per month - and that does matter, because if we want to repair the incentives around a whole lot of our technical platforms, we need to learn to pay for what we're using. I'll note that Elon's $8 verification very much does not meet those criteria, because it appears to be just an additional money grab on top of the user profiling and advertising that's already happening.

    In terms of the actual architecture, on the face of it, it definitely seems odd to have gone for hub and spoke rather than a true peer-to-peer mesh design, but I can't help but wonder if that's a function of pragmatism? Yes, the idealistic geek in me wants everyone running their own instance, but we all know that's not how these things pan out in reality. If lowering the barrier to entry by making it easy to create an account on a large existing instance is the price we pay for openness, I'll cautiously accept that. Better the choice of several large providers on an open protocol that others can join - plus the escape hatch of starting your own as an extreme case rather than the default - than no choice at all if the project had never gained traction.

    I haven't dug too deep into Mastodon's architecture, so I can't say how much efficiency is lost or gained as a result of encouraging semi-centralisation as it does now. I'd hope that there are ways to update the protocol to make it more p2p than it is currently and head off these issues before they become significant, but for now it makes sense to me that the existence of instances like mastodon.social is a priority to drive enough adoption for the platform to be viable at all.

    3 votes
  5. skybrian
    Link

    From the article:

    I’ll let my friend Hugo Gameiro, who runs masto.host and hosts my instance, explain what happened next:

    You just get a lot of engagement and that requires a ton of Sidekiq power to process.

    For example, let’s look at your birthday post … besides requiring thousands of Sidekiq jobs to spread your post through all their servers (you have 23k followers, let’s assume 3k different servers), as soon as you create the post 3k Sidekiq jobs are created. At your current plan you have 12 Sidekiq threads, so to process 3k jobs it will take a while because it can only deal with 12 at a time.

    Then, for each reply you receive to that post, 3K jobs are created, so your followers can see that reply without leaving their server or looking at your profile. Then you reply to the reply you got, another 3K jobs are created and so on.

    If you replied to the 100 replies you got on that post in 10 minutes (and assuming my 3K servers math is right). You created 300K jobs in Sidekiq. That’s why you get those queues.

    There is a footnote explaining that this may be wrong in the details of exactly how many jobs were created. Still, this seems similar to when an email is inadvertently sent to a large mailing list without using bcc: and every reply goes out to everyone again? (I've sketched the arithmetic below, after the quotes.)

    Prior to the latest Twitter migration, I was paying around €280/year (or a little over €20/month) for my Mastodon instance on a custom plan I had with Hugo from the early days. This week, I upped that to a roughly €50/month plan. And that’s still not enough as my birthday post just showed so Hugo, kindly, has suggested he might have to come up with a custom plan for me.

    And yet, the problem is not one that will go away. We can only kick the ball down the road, as it were.

    (Unless I piss everyone off with this post, that is.)

    Thankfully, by running my own instance, the only person I’m burdening with this additional expense is me. But what if I’d been on a public instance run by someone else instead?
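
    Taking Hugo's numbers above at face value, the fan-out arithmetic is easy to reproduce, and it also shows why 12 Sidekiq threads drown; the one-second-per-job figure below is my assumption, not from the article:

      servers = 3_000  # Hugo's estimate of distinct servers among 23k followers
      replies = 100    # the author replying to each of 100 replies

      jobs = replies * servers
      print(f"{jobs:,} Sidekiq jobs")  # 300,000

      # Draining that queue with 12 threads, assuming ~1 second per delivery job:
      threads, secs_per_job = 12, 1.0
      hours = jobs * secs_per_job / threads / 3600
      print(f"~{hours:.1f} hours to drain the queue")  # ~6.9 hours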

    1 vote