Bluesky April 2026 Outage Post-Mortem

(pckt.blog)

95 points | by jcalabro 4 hours ago

13 comments

  • threecheese 3 hours ago
    > What I had missed is that we deployed a new internal service last week that sent less than three GetPostRecord requests per second, but it did sometimes send batches of 15-20 thousand URIs at a time. Typically, we'd probably be doing between 1-50 post lookups per request.

    That’ll do it.

    • 98codes 3 hours ago
      Ahh, the three relevant numbers in development: 0, 1, and infinity.
    • jandrese 1 hour ago
      The incredible part is that because their backend is all TCP/IP, they were literally exhausting the ports by leaving all 65k of them in TIME_WAIT, and the workaround was to start randomizing the localhost address to give themselves another trillion ports or so.
    • bombcar 3 hours ago
      Zero, one, many, many thousands.
    • LoganDark 55 minutes ago
      And then they fixed the issue by using multiple localhost IPs rather than, perhaps, not sending 15-20 thousand URIs at a time.
      • odo1242 41 minutes ago
        They mentioned it was a temporary fix that they removed after finding and fixing the true root cause, though.
    • htx80nerd 1 hour ago
      Less than ideal, if I had to be frank.
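
The localhost trick jandrese describes works because a TCP connection is identified by its full 4-tuple (source IP, source port, destination IP, destination port), so every extra 127.x.y.z source address multiplies the ~65k ephemeral ports: roughly 16M loopback addresses × 65k ports ≈ a trillion tuples. A minimal Go sketch of the idea (hypothetical code, not Bluesky's actual client; `dialMemcached` and the target address are assumptions):

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
)

// randomLoopbackAddr picks a random source address inside 127.0.0.0/8.
// Linux routes all of 127/8 over loopback, so each address contributes
// its own ~65k-port ephemeral range to the connection 4-tuple space.
func randomLoopbackAddr() *net.TCPAddr {
	ip := net.IPv4(127, byte(rand.Intn(256)), byte(rand.Intn(256)), byte(1+rand.Intn(254)))
	return &net.TCPAddr{IP: ip} // Port 0: kernel picks an ephemeral port
}

// dialMemcached dials the target from a randomized loopback source, so
// connections stuck in TIME_WAIT on one source IP don't starve the rest.
func dialMemcached(target string) (net.Conn, error) {
	d := net.Dialer{LocalAddr: randomLoopbackAddr()}
	return d.Dial("tcp", target)
}

func main() {
	fmt.Println(randomLoopbackAddr().IP.String())
}
```

Note this only helps for loopback traffic (e.g. a local memcached or sidecar); for remote peers you'd need real addresses on the interface.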
  • tapoxi 1 hour ago
    I don't really understand this architecture, but I thought Bluesky was distributed like Mastodon? How can it have an outage?
    • pfraze 1 hour ago
      This writeup is useful for backend engineers: https://atproto.com/articles/atproto-for-distsys-engineers

      The simple answer is that atproto works like the web and search engines, where the apps aggregate from the distributed accounts. So the proper analogy here would be Yahoo going down in 1999.

      • tapoxi 1 hour ago
        This is a fantastic write-up, thanks for sharing!
      • isodev 1 hour ago
        Google and MSN Search were already available at that time. Also, websites used to publish webrings, and there were IRC channels and forums where you could ask people about things.
    • isodev 1 hour ago
      It’s more of a concept of a plan for being distributed. I even went through the trouble of hosting my own PDS, and still I was unable to use the service during the outage.
    • Retr0id 1 hour ago
      Mastodon infra can have outages, too.
      • tapoxi 1 hour ago
        It's just confined to one instance if it goes down, not all of Mastodon.
    • LoganDark 54 minutes ago
      A web interface and home server can have an outage. Bluesky is just a web interface and home server.
  • mwkaufma 1 hour ago
    Tell us more about this buggy "new internal service" that's scraping batch data :P
  • goekjclo 2 hours ago
    > The timing of these log spikes lined up with drops in user-facing traffic, which makes sense. Our data plane heavily uses memcached to keep load off our main Scylla database, and if we're exhausting ports, that's a huge problem.

    I expect this is common.

  • drewg123 56 minutes ago
    Golang's use of a potentially unbounded number of threads is just insane. I used to be fairly bullish on Golang, but this, combined with the fact that it's garbage collected, makes me feel it's just unsuitable for production use.
    • floating-io 20 minutes ago
      You can have this problem with any kind of thread -- including OS threads -- if you do an unbounded spawn loop. Go is hardly unique in this.

      Goroutines are actually better AFAIK because they distribute work on a thread pool that can be much smaller than the number of active goroutines.

      If my quick skim gave me a correct understanding, then the problem here looks more like architecture. Put simply: does the memcached client really require a new TCP connection for every lookup? I would think you'd pool those connections just like you would for a typical database and keep them around approximately forever. Then they wouldn't have spammed memcached with so many connections in the first place...

      (edit: ah, it looks like they do use a pool, but perhaps the pool does not have a bounded upper size, which is its own kind of fail.)

    • tombert 24 minutes ago
      Why does garbage collection make it unsuitable for production use? A lot of production software is written in garbage-collected languages like Java. Pretty much the entire backend for iTunes/Apple Music is written in Java, and it's not doing any kind of fancy bump-allocator tricks to avoid garbage. In my mind, it's kind of hard to argue that Apple Music is not "production use".

      There are certainly plenty of projects where garbage collection is too slow, but I don't know that they're the majority, and more people would likely prefer memory safety by default.

  • pembrook 46 minutes ago
    Distributed social media goes down? hrmmm.

    Email and the internet don't have "downtime." Certain key infra providers do, of course. ISPs can go down. DNS providers can go down. But the internet and email themselves can't go down absent a global electricity outage.

    You haven't built a decentralized network until you reach that standard, imo. Otherwise it's just "distributed protocol" cosplay. Nice costume. Kind of like how everybody has been amnesia'd into thinking Obsidian is open source when it really isn't.

    • iAMkenough 44 minutes ago
      Bluesky is a provider. Blacksky didn’t go down.
      • pembrook 34 minutes ago
        Is there anything running on Blacksky other than Bluesky with more than say, 100 active users?

        AOL never even got to that level of dominance in the internet 1.0 era.

        The point is it's not a distributed network if one node is 99.9% of all traffic.

  • gsibble 1 hour ago
    Did all 3 users notice?
    • ffsm8 50 minutes ago
      Naw, only one did. Turns out the other two were his socket accounts he used to upvote and comment on his own content.

      Okay, nuff trolling for today

    • dogemaster2027 47 minutes ago
      [dead]
  • electrondood 1 hour ago
    Great write up... curious about the RCA. Thanks!
  • rvz 2 hours ago
    Thank you for the post mortem on this outage.
  • jonstaab 1 hour ago
    nostr never goes down
    • jandrese 1 hour ago
      If nostr went down would people even notice?
      • nout 31 minutes ago
        If any major nostr relay goes down, no one notices. That has happened many times, the network is very resilient to that.
      • jonstaab 1 hour ago
        probably not
    • pfraze 1 hour ago
      All support to other decentralizers, but nothing never goes down.
      • nout 32 minutes ago
        The comparison here is to something like TCP/IP. TCP/IP never goes down. TCP/IP is a protocol, the servers may go down and cause disruption, but the protocol doesn't really have the ability to "go down". Nostr is also a protocol. The communication on top of Nostr is pretty resilient compared to other solutions though, so that's the main highlight here.

        If tens of servers go down, some people may start noticing a bit of inconvenience. If hundreds of servers go down, some people may need to coordinate out of band on what relays to use, but generally speaking it still works ok.

      • jonstaab 1 hour ago
        1000x redundancy makes it vanishingly unlikely. Although I know we're due for a pole shift so all bets are off I suppose.
  • dogemaster2027 1 hour ago
    [dead]
  • templar_snow 2 hours ago
    [flagged]
  • jmclnx 2 hours ago
    Light blue on a dark blue background. That's a new one; I've seen grey text on light grey, but blue on blue?

    The article does work in lynx, at least I can read it.