GitLab isn't much better. The releases ignore serious bugs, but they have unlimited budget to make stupid UI tweaks that offer zero real world improvement.
Worse, one of the updates broke the self-hosted version by botching a migration and giving no error about it. The installation broke in mysterious and subtle ways, leaving us scratching our heads for days.
The next update warned us about the problem, so we ran the repair commands to put things in order again. This is a very small server with ~10 users and ~50 repos at most.
Nothing is pissing me off more than GitHub's stability going down the tubes RIGHT as work is migrating everything, and I mean everything, from CircleCI to GH.
The wildest thing is that Azure Repos/Pipelines was better than this.
The one caveat is that they're still migrating it to Azure infra, so it's possible it's still in a one-foot-in, one-foot-out kind of scenario, from what I've heard. But this isn't inspiring confidence.
Could be. But 99% of the repos are static garbage with no PR nor actions.
They mentioned they have some Elasticsearch reindexing going on; I would guess they needed to reshard or move stuff and something didn't work well. If I understood right, they mentioned that the PRs ES index grew along with the number of repos, though they didn't share proof.
It might be anything. It seems they lost huge chunks of staff due to layoffs and structural changes, and to MS, which has the reverse Midas touch.
This is just pure speculation, but there's also no reason now for MS to keep GH working. They absorbed all the code they wanted. Now they can let it burn; it would be even better for them if that happened.
> Could be. But 99% of the repos are static garbage with no PR nor actions.
But the 1% of repos that do have PRs and actions are likely seeing enormous increases in volume.
I have been part of two very large companies with self-hosted Git, and I've seen enough to be confident that this is an incredibly hard thing to manage.
Ya, but they're owned by freaking Microsoft and have billions of dollars and employees to throw at the problem. The outage problems shouldn't be happening, period.
Easy to say that! Some problems are legitimately hard to solve though. Github is likely seeing usage patterns that have never been seen before and I bet some of these failure modes are novel
If you are at the limits of your architecture you may need to rewrite things, and if you are rewriting things you cannot arbitrarily speed that up by throwing dollars at it.
At that point, make it lazy indexing? Who cares that I can't find a repo that was made 10 seconds ago, or even 15 minutes ago? No seriously, who cares? Search to that level of nuance is not mission critical, I don't care what anyone says, you'll live if you wait another 15 minutes or even an hour. Their search has been terrible since their last major set of search changes where they overhauled it completely either way.
Serious question, have you been part of an org that had to scale orders of magnitude very quickly?
Anyone who has been part of that journey knows how painful it really is. A lot of the time the systems fail at all levels, and you have to redesign them from first principles.
> Serious question, have you been part of an org that had to scale orders of magnitude very quickly?
I have, but it depends what you mean.
Scenario 1: e-commerce SaaS (think: Amazon but whitelabel, and before CPUs even had AES instructions); Christmas was "fun".
Scenario 2: Video Games. The first day is the worst day when it comes to scale. Everything has to be flawless from day 0 and you get no warning as to what can go wrong.
Yet, somehow, I managed to make highly reliable systems.
In scenario 1, I had an existing system that had to scale up and down with load. This was before the cloud, and hardware had a 3-4 month lead time, so most of the effort was around optimising existing code, increasing job timeouts and "quenching" sources that were expensive. We also used to do some 'magic' when it came to serving requests that had a session token or shopping cart cookie.
In scenario 2, we had a clean-room implementation and no legacy, which is a blessing but also a curse: there's no possibility to sample real usage, but you also don't need to worry about making breaking changes that are for the better. With legacy you have to figure out how to migrate to the new behaviour gradually.
So, pros and cons... but it's not like handling huge load hasn't been done before. Computers are faster than they have ever been, and while my personal opinion is that operational knowledge is dying (due to general disdain for people who actually used to run systems at scale: not just write hopeful "eventually consistent" YAML that they call deterministic), the systems that exist today hold your hand much better than they did for me 20 years ago.
And I ran 1% of web traffic with an ops team of 5 back then. So, idk what's going on here.
EDIT: Likely people are flagging me because I sound arrogant (or I hurt their feelings by talking bad about YAML-ops), but all I am doing is answering the question presented based on my experience.
I think you meant "green fields" and not "clean room"? Clean room refers to reverse engineering an existing program to create specifications, then having another team implement the specifications without legal risk from involving the original.
They say it is at least one order of magnitude[1]; "our plan to increase GitHub’s capacity by 10X in October 2025 .. By February 2026, it was clear that we needed to design for a future that requires 30X today’s scale."
I wouldn't be surprised. Have you not noticed the sheer volume of slop being posted everywhere these days? Almost all of that is hosted on Github. And some of those repos have insane commit frequencies.
They can claim that...but if you've built a public SaaS before you know the job is not to host the software, it's to put rails around people taking it down. They've had since 2008 to build those rails, and they're just now hitting places that take the service down on the regular?
Two weeks ago I had a commission to explore migrating from selfhosted gitlab to github for better AI integration. Last night that project was cancelled due to github outages and we're going to upgrade the self hosted server instead. I'd be tempted to use something like forgejo but there are a dozen devs and honestly I've only ever used it solo.
I didn't challenge them on this but it's because of Claude integrations with github. I'm not sure what that gives them over just running it against the codebase, but I didn't want to lose the opportunity to finally move them from that EoL server
We ended up on Azure Pipelines kinda by default, because it was there and mostly paid for, with the intention of migrating later, but it's been fine. Boring but stable and functional.
Azure Repos is kinda fine. It's really basic and there is nothing to break. I actually really, really like their ticketing thingy for the same reason: it has the necessary stuff, and the management types can't add a million fields to it and annoy me with reporting, burndown charts or whatnot.
It has an annoying bug where approving PRs from the CLI won't delete branches when you squash commit, while clicking the button in the UI does it perfectly fine. It's been a bug for a while (as in, several years), and if you find something like that, don't expect it to ever be fixed. As a whole it's not a bad tool, though.
As you say it's limited, but that can be both good and bad.
I'm on the other side of the fence. We're just about done migrating from GitHub to GitLab (self-hosted) and it's been refreshing to DGAF about any of the GH outages I read about.
Similar boat myself: finished moving all the important stuff from GitHub to self-hosted Forgejo with cross-platform builds. Not only do I avoid all the downtime stuff, but E2E builds also take ~20% of the completion time they used to, since my runners now have dedicated hardware hosted at home.
To maintain a fair comparison, GitHub has supported self-hosted runners for several years (maybe that doesn’t work for your specific usage, for whatever reason).
> To maintain a fair comparison, GitHub has supported self-hosted runners for several years
Yeah, tried that first, as I didn't want to move to Forgejo, I just wanted to keep working when I wanted to work.
The GitHub runner on Linux seemed fine, but the ones for macOS and Windows seemingly did something that made them a hell of a lot slower than even running VMs and executing stuff inside those. I'm not sure what the runner is doing, whether there is some built-in sandboxing or whatnot for those platforms, but it wasn't feasible for me to rely on, as the builds took way too long.
We were on self-hosted Gitlab but after a merger were forced to Github. Navigation feels painful in comparison and basic features such as commit graph are now behind more expensive tiers.
Interesting! I worked with Gitlab and I also thought it was quite clunky. If it was not for the stability issues GitHub is fine. Any other alternatives to GH or GL?
Why do you care about GitHub? It's just another corporation doing what it knows best: harvesting money. The software ecosystem can live without GitHub just fine.
I think you’ve got it backwards. GitHub is by far the market leader for hosted repositories and maybe for CI too. This is like asking “Why are companies interested in using AWS?”
When one firm is so dominant for so long, the question is more like “Why shouldn’t we just use GitHub like 80% of software companies do?”
The issues they’ve had are almost all very recent. Very few companies have reevaluated that decision, because moving a big and well-integrated part of infrastructure is a huge project that delivers no value to the business. Speculating that you’ll have fewer development-slowing outages is not the most convincing when asking for the budget to do this. Plus, self-hosted isn’t necessarily going to have better uptime - mistakes happen.
I think before Actions, it would have been a lot easier to migrate off GH though. You’d just need to change a lot of repo URLs and find a way to set up webhooks from the new place to poke CI. Now with Actions, a lot lives in GH and in a proprietary flavor that doesn’t just ‘lift and shift.’
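To make the 'lift and shift' problem concrete, here's a minimal sketch of the same trivial CI job in both dialects (job names and commands are illustrative, not from any real repo). The GitHub flavor's runner labels, checkout step, and marketplace `uses:` actions have no one-to-one equivalent elsewhere, so every workflow needs hand-translation:

```yaml
# .github/workflows/ci.yml -- GitHub Actions flavor
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest        # GitHub-specific runner label
    steps:
      - uses: actions/checkout@v4 # marketplace action, GitHub-specific
      - run: make test

# Roughly the same job as .gitlab-ci.yml -- note there is no checkout
# step or "uses:" mechanism to translate; the GitLab runner clones for you:
#
# test:
#   image: ubuntu:latest
#   script:
#     - make test
```

The `run:` shell steps usually carry over; it's everything around them that doesn't.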
More than half the companies I have worked for use Github. The others used Atlassian tools which were at least as bad from a reliability perspective and much less nice to use (IMO).
> The issues they’ve had are almost all very recent.
It has been bad for at least 18mo, maybe longer? I recall multiple work impacting outages at my previous employer extending back into 2024. Maybe even earlier than that?
> Honest question, why are companies interested in hosting on github?
Mostly it boils down to marketing and it being easier to establish a community. Almost every developer has an account there, so the network effects are much larger; if you're a new FOSS project, finding contributors and getting your project in front of others' eyes is much easier on GitHub than on your own Forgejo instance.
With that said, I'd question if chasing "most external one-time contributors" or GitHub stars is the right way to actually run a FOSS project, personally I'd avoid thinking about those vanity-numbers as much as possible and focus on the project, code and contributors themselves.
But I've literally heard those two arguments for "why GitHub" countless times over the years.
Go with the flow, don't rock the boat and use what developers already know, are probably the most cited reasons I've heard.
I've tried so many times in the past to argue for a self-hosted setup that you fully control, if you can afford it. Things just get so much smoother, and if you're a software development company, you probably want to own the software development workflow E2E so you can actually ship as fast as you want.
I’ve argued the opposite most of the time in build vs buy. Buy in almost every case unless it’s a real competitive advantage to you.
I know developers love to build, but do you think:
1) self-hosting git provides any competitive edge to the business over letting someone manage it?
2) it provides so much value that you’re willing to fund engineers to build, secure, support this on an ongoing basis?
I’ve found the answer to those is No in both cases.
The same reason you wouldn’t build your own internal chat tool, you’d use Slack. And you wouldn’t bother self-hosting your own Jira or documentation.
Code hosting is code hosting, there's no difference where it's hosted. There's no slowdown in delivery with using GitHub: their March uptime was 99.5%, which annoys some commenters, but it's fine. That's about 3.6 hours of downtime per month, which is tolerable.
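For reference, a quick back-of-the-envelope on what those uptime percentages actually allow per 30-day month (99.5% works out to roughly 216 minutes, i.e. ~3.6 hours; 45 minutes corresponds to about 99.9%):

```shell
# Allowed downtime per 30-day month at a few uptime tiers.
for pct in 99.5 99.9 99.99; do
  awk -v p="$pct" 'BEGIN {
    total = 30 * 24 * 60   # minutes in a 30-day month
    printf "%s%% uptime -> %.1f minutes/month of downtime\n", p, total * (1 - p / 100)
  }'
done
```

Whether ~3.6 hours a month is "tolerable" depends a lot on whether it lands during your working hours.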
You would spend way more effort and money building a janky self-hosted solution, and end up with a worse result.
Usually, at large enough corporations, it's one of two things. Some random project gets open sourced and ends up on GitHub (see, for example, Salesforce); or, more commonly, some subsidiary or acquisition had GitHub and has either refused to migrate to the internal source system, or the hassle of migration isn't worth it.
I don’t know why you would even really need hosted git or why you’d be affected by its downtime. Git is decentralized by design. One node going down should not stop development. You don’t need a “central hub” to keep working.
I guess it’s all the other non-git stuff like issue tracking and other (unfortunately) centralized products on GitHub that causes disruption when they go down.
Weird how GitHub built itself around a distributed VC system and then made all its other services centralized.
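To the point about git itself being decentralized: nothing stops a team from configuring a second remote as a warm fallback for pushes and pulls while a forge is down. A self-contained sketch, using local bare repos as stand-ins for two hosted forges (all paths and names are illustrative):

```shell
set -e
tmp=$(mktemp -d)
# Two bare repos stand in for two hosted forges.
git init --bare -q "$tmp/primary.git"
git init --bare -q "$tmp/backup.git"

git init -q "$tmp/work" && cd "$tmp/work"
git config user.email dev@example.com && git config user.name dev
echo hello > README && git add README && git commit -qm "initial commit"

git remote add origin "$tmp/primary.git"
git remote add backup "$tmp/backup.git"
git push -q origin HEAD:main
git push -q backup HEAD:main   # if origin is down, this still works
```

Teammates can pull from `backup` while the primary host is out; it's only the non-git extras (issues, PRs, CI) that go down with the forge.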
Yes, you want to run automated builds, unit tests, end-to-end tests and UI tests, and make it easy for testers to deploy specific versions/tags to an internal server. Also kick off builds for iOS on Mac computers. We use TeamCity for that.
Tracking of issues, features and epics. Maybe also a knowledge base / wiki. We use Jira.
I'm certain I'm up there in the top 1% of users, or close to it: writing software daily, with consistent, prolonged volume of work that is actually used by others, over the past nearly 20 years, based on user activity statistics I've collected.
I, too, am a fairly, but not immediate early user of GitHub. Despite GitHub’s poor metrics, I am still shipping, because writing software doesn’t require GitHub.
Hashimoto’s comments sound disturbed and I hope he finds some peace, but if he wasn’t who he was and you read these comments, you’d think this person had a problem. So, I think he does.
> because writing software doesn’t require GitHub.
If your workflow doesn't need the features that have had reliability problems over recent times (which includes some of the basic collaborative features), is GitHub even the right tool for your task? If not, then your judgement of others for complaining about the issues is presumptuous to the point of being somewhat obnoxious.
The post being replied to essentially said “I don't use the features that have been regularly broken in recent times, or where the features I do use [core git] were broken it luckily didn't affect me, so anyone thinking of leaving has a mental problem”.
Or to paraphrase the old joke:
Q: How many programmers does it take to change a lightbulb?
A: The sunlight through the windows here is working fine, if you can't see where you are that must be a “you” problem.
"Writing software doesn't require GitHub". Well, if they do not need whatever is specific to github and claim that someone who sorely lacks these features has a mental problem...
To be fair, the author of the post said the same thing. From the other thread on HN, they themselves said: "Nobody should cry over a SaaS, of all things. But GitHub has meant so much more to me than that (all laid out in the post). I have an unhealthy relationship with it. "
"Hashimoto’s comments sound disturbed and I hope he finds some peace..."
You don't often see the completely unhinged ad hominem 'faux mental health concern' segue here on Hacker News to try and paint someone as 'disturbed'. I thought that was mainly a Reddit thing.
>if he wasn’t who he was and you read these comments, you’d think this person had a problem. So, I think he does.
Honestly, I thought you were demeaning him to defend GitHub. But after reading the article, it does seem like his emotional reaction is not aligned with the situation. Just saying it openly for others who get the same impression I did.
That said, GitHub can be a full-time job for many (handling and responding to issues, reviews, and so on, depending on the size of the project). It's also not unheard of to have PR descriptions and comments serve as documentation rather than commit messages. So GitHub's unavailability is certainly extremely disruptive to many companies.
No, calling him "disturbed" was entirely out of line. As with everything it's not GitHub per se that is causing him consternation but the loss of what it represents:
> “Some people doom scroll social media. I've been doom scrolling GitHub issues since before that was a word,” he admitted. “On vacations I'd have bookmarks of different projects on GitHub I wanted to study. Not just source code, but OSS processes, how other maintainers react to difficult situations. Etc. Believe it or not, I like this.”
> “I've been angry about it. I've hurt people's feelings. I've been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal,” he wrote.
He is a passionate person whose identity is heavily invested in community and technical achievement. He's upset about his world being disrupted, not that GitHub as a product is failing him. This is how high-performing people are - they care deeply about their work. Could you imagine leveling these charges at a visual artist when they complain about a company messing with their favorite pencil? Or a saxophonist when their reed of choice is discontinued? It's petty and reductionist.
>Could you imagine leveling these charges at a visual artist when they complain about a company messing with their favorite pencil? Or a saxophonist when their reed of choice is discontinued? It's petty and reductionist.
If that person is lashing out and hurting people around him as a consequence, then yeah, I'd say that's not healthy. Unless he has a lower bar than I do for what counts as lashing out, and just means online complaints.
> I, too, am a fairly, but not immediate early user of GitHub. Despite GitHub’s poor metrics, I am still shipping, because writing software doesn’t require GitHub.
GitHub’s downtime is a problem for issue tracking, PR merging, contributing and reviewing PRs, and more.
Your exact point was already pre-addressed in the blog post because it’s so predictable that some would completely miss the point. GitHub’s downtime isn’t about stopping you from writing code on your own machine.
> Hashimoto’s comments sound disturbed and I hope he finds some peace, but if he wasn’t who he was and you read these comments, you’d think this person had a problem. So, I think he does.
What a gross ad hominem about someone’s mental health. Please don’t do this.
With Ghostty being the latest project to leave GitHub, it does make me wonder who will leave next.
I don't expect everybody and their nan to leave GitHub by next wednesday and spin up their own Forgejo server, but I do think GitHub should be worried that people are finally looking to move away from them.
The entrenchment here is ridiculous. The average software engineer does not care at all about their vcs or their forge and their knowledge of both is extremely shallow.
For the people who want to do their work and get on with their life, it does not really matter that much.
Exactly. I run my own Gitea server, but put my stuff on Github, because that's where the people are. Self-hosting an MP3 is not the same as being on Spotify.
I keep my open source work on github for similar reasons. I don't expect nor want to deal with contributors having to create accounts on a self-hosted forge for every individual project they work on.
Zero to one nines of uptime. Shoving copilot in everyone’s faces rather than focusing on quality. Threatening to charge people for self-hosted runners.
> GitHub saw a 58% year-over-year increase in the number of incidents, reaching 109 reported cases:
> 17 of them were classified as major, leading to over 100 hours of total disruption.
> April stood out as the most turbulent month, with incidents accumulating to 330 hours and 6 minutes.
It's probably due to a more recent change, i.e., focusing on features over stability. Or it could be that there was some turnover in ops and someone who was a hawk about stability isn't there anymore.
If I were to bet, there's probably a product manager or other leader who's just gung-ho on new features and losing track of who their customers are and what their needs are.
IMO, it's probably a combination of factors. I get the feeling GitHub has no clear leadership by anybody who actually USES it. The priorities internally were almost certainly "get onto Azure", "shove Copilot/AI down everybody's throat", and other generic "product driven" initiatives. The user-hostile move to React was done in a way that broke browser back-button functionality, especially in Pull Requests.
They don't/didn't care because what are you going to do?
On one hand, the free users shouldn't complain too much, though I get their anger. But the place I work is an enterprise paying customer and this is bullshit.
That can happen many times during a buyout. Some company buys a thing, and the problem then is ownership of it: who in the new company is going to own the 'make sure it stays good' problem? Sometimes with a buyout the people who were doing that even stay at the company, but it's a matter of motivation. MS has a real, serious problem. You can see the gaps where they have glued together at least 10 companies and called it Microsoft. They have a huge reputational-risk issue, where something breaking in the Xbox division can have a negative impact on the tools division, and the other way around. They lack focus on many items. They have needed a 'service pack 2', stop-the-presses moment to fix this Mount Everest of tech debt.
Definitely not. I remember, some 4 years ago, a random bug in a GitHub-supported GitHub Action and a comment in an issue saying: "I heard the team responsible for this action was laid off, don't expect a fix". This was shortly after the Microsoft acquisition.
But the vibe coding BS probably made it 10 times worse.
They started with a hands off approach and then went hands on, I’m not sure but that ‘hands on’ timing is likely to happen shortly after the usual acquisition vesting period of 3 years when the old guard starts to leave.
Yes you are correct, ~4 years ago was when they had a lot of layoffs at microsoft and github. Initially after the acquisition it was mostly fine, but after the layoffs it was a noticeable degradation in service quality and reliability.
> But the vibe coding BS probably made it 10 times worse.
Yup, I keep seeing this in various companies. Teams that were effective and did solid engineering are now more effective and do even better engineering. Teams that were effectively already just "boilerplate monkeys" now produce a lot more code than before, but the quality is the same, so effectively they're worse at contributing than before, and take more shortcuts, not less.
From my point of view, agents are amplifiers, so if you usually build spaghetti projects, agents just help you do that faster, not avoid the spaghetti altogether. If you usually build well-designed stuff, they can help you put that together faster.
Agreed. In general the amount and variety of bugs introduced since everyone started vibing is worrying. It is probably a national security concern but I guess so is the economy tanking due to failed AI investments. Guess we will see
I'm not sure it's specific to vibe coding so much as the AI feature add rush. Every SAAS company is throwing more shit at the wall than I've ever seen, to the point where I'm actively avoiding some software because I don't want yet another new feature release pop-up when I log in.
Add in them being extremely high scale and critical infrastructure and it's easy to see where things can go wrong, vibe added code or not. I think we'd all prefer they have long slow roadmaps but clearly leadership thinks they're in a fight with the other AI companies to release the newest and bestest every day.
Perhaps they can’t help themselves out of habit, it is their nature.
The original Red Dog team that started Azure is long gone, and the general success of the cloud papers over all levels of incompetence, so that the incompetence is now entrenched and unable to do better.
Cloud service providers have this unfortunate property where poor designs will make more money which makes it hard to maintain a culture of excellence. I tried to push a design change that would result in a 10x throughput for a certain product and was told that a 90% drop in usage is the last thing they want. I self host my own stuff with GitLab, so far not a single unplanned outage in 6 years.
Perhaps a Roman decimation is in order, whenever GitHub experiences an outage fire one GitHub employee at random. That should help get interests in line and allow for cross org cooperation. With 150 outages per year and a staff of 6,000 that amounts to 2.5% per year if no improvements are made.
> As place to run test? Build your own infrastructure. It's easier than ever. Why rely on blackboxes to do that?
I'm not saying this is horrible advice, but I think it conveniently ignores some major reasons people prefer cloud infrastructure in the first place.
Building your own infrastructure is the (relatively) easy part. Maintaining it, ensuring everything is patched, passing compliance audits, dealing with your own outages (I find it a bit ironic when everyone complains about cloud downtime, as if self hosted infrastructure has 99.999% uptime) is the expensive part. I'm not saying it's that hard to do, but once you get to a certain size it requires dedicated staff to manage, which is expensive.
In fact, if GitHub Actions were more reliable, I would hardly see any reason at all to host your own test infrastructure for most companies. The only reason hosting your own is more attractive is because GH Actions has such poor uptime.
Objectively we can not say "this is the beginning of the end of GitHub". Many people use GitHub daily still. But I think this right now is kind of a period of where GitHub is sliding into a bigger crisis. I am noticing this, among other things (including reddit and Hackernews in the last ~4 weeks becoming more and more critical of Microslop, 'xcuse me, Microsoft, which controls GitHub of course) when I look at the recent blog entries made by GitHub staff - the three latest being:
27th April 2026: Starting June 1, your Copilot usage will consume GitHub AI Credits.
28th April 2026: An update on GitHub availability
28th April 2026: Securing the git push pipeline bla bla bla critical remote code vulnerability bla bla bla
In particular the GitHub availability is interesting. When I read it, it almost sounded like a plea to "believe in us still, guys!!!". If you then read what the ghostty author wrote, something between the blog post from GitHub, and the outside world, no longer matches. GitHub is like on the titanic, they see the iceberg part above water and say "nothing to see here, this ship is invincible" (aka AI is invincible). Meanwhile everyone else already jumps off the ship ...
Titles can be generated as well; just tell the LLM whether your readers love drama and witch hunts or are pseudo-intellectuals. It'll come up with the correct framing.
I wonder if there's a place for something like Matrix, but for repositories (or maybe the Matrix protocol can handle that?). A world where we have self-hosted, SaaS, etc., but all interlinked and searchable. Say I find a project on GitLab that I want to contribute to: I check it out to my personal server (or someone else's hosted one), and raise a PR back to the original repo.
I know it doesn't answer your question, just thinking aloud really.
Some alternative forges are built with decentralization in mind:
- forgejo [1] is working on ForgeFed [2], an extension of ActivityPub (the protocol made popular by Mastodon)
- tangled is built on top of ATproto (the protocol behind Bluesky) [3]
- radicle is rolling their own protocol, more peer-to-peer than federated [4]
- fossil is a broader all-in-one solution: not only a new version control system (a replacement for git), but also a forge with issues (bug tracking), PRs, comments, wikis, ... [5]
The other self-hosted forges such as GitLab, SourceHut and Gitea don't have such a high level of decentralization and resilience. That doesn't make them less good; they are solving different problems, mainly being an easy-to-use self-hosted alternative to proprietary forges. For instance, Gitea has Gitea Actions, which is designed to be compatible with GitHub Actions [6], while I don't think running CI/CD workflows in a decentralized way will be the priority of projects like tangled or radicle.
This would be best, though it's a big ask, when everyone has gone from self-hosting plus a sprinkling of cloud services to only cloud services and no memory of how to self-host.
I used to run a git server for all my main projects, and mirrored public ones on GitHub. Then the convenience of GitHub lured me in, to the point I shut down my private git server 5 years ago.
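A setup like that (a private primary plus a GitHub mirror) can be kept in sync with a single `git push` by giving one remote two push URLs. A self-contained sketch, with local bare repos standing in for the real servers (paths are illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/private.git"        # stand-in for the private server
git init --bare -q "$tmp/github-mirror.git"  # stand-in for the GitHub mirror

git init -q "$tmp/project" && cd "$tmp/project"
git config user.email me@example.com && git config user.name me
echo code > main.c && git add main.c && git commit -qm "initial commit"

git remote add origin "$tmp/private.git"
# Adding any push URL overrides the fetch URL for pushes,
# so re-add the primary first, then the mirror:
git remote set-url --add --push origin "$tmp/private.git"
git remote set-url --add --push origin "$tmp/github-mirror.git"
git push -q origin HEAD:main   # one push updates both remotes
```

The downside is the same one the thread keeps circling: the git data mirrors cleanly, but issues and PRs don't.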
The self-hosted version has been getting heavier and heavier. I'm reluctant to call it bloat, because it does seem to be features that conceivably someone needs, but I don't know about their direction of travel.
TLDR; don't use their SaaS offering, but probably better, yes, though who knows for how long.
I don't use their SaaS offering, but I've been using the self-hosted versions (mostly in CE flavours, but occasionally paid) since the days of the weird black and white fox, when gitlab looked very bootstrap-y. (The logo in question for the curious, but you can't unsee it: https://upload.wikimedia.org/wikipedia/commons/0/0a/Gitlab_l... )
Anyway, since LLMs for coding became a thing, coupled with the realities of running a business post-IPO, it's been a slow-ish downward trend for the self-hoster, as their offering gets more and more bloat that's likely easier to manage at scale for Gitlab, but stands in stark contrast to what it once was.
Little things are piling up: components left for dead (we now have both TODO - which is an abysmal mess, and "assigned work items" - WHY?!), issue boards that remain messy, advertisements creeping into the CE version, increasingly wild hardware requirements... and some recent changes to their documentation that strike me as a dark pattern; very much a recognition that either you're an enterprise running your own paid GitLab with some kind of support, or you're a SaaS user and don't GAF about the ops docs.
The transition to websockets was annoying. Mostly because it kinda-doesn't work and there's no decent polling fallback, which results in time wasted hitting refresh, in 2026, when everything worked fine from 201x-2025.
I've kept my eye out for alternatives, but GitLab's CI/CD with self-hosted runners is still my preferred flavour hands down, and it continues to be the reason I stick around.
Overall, it's a much slower decline, but like all stock-market-centric companies, you can feel the writing on the wall. Nevertheless, we're in the middle of a GitLab migration from one cloud provider to another, because we still haven't found something better. :/
M$ bought GH almost a decade ago. The instability is due more to mainstream traffic (plus AI automation pushing pointless repos) than to some slow-motion evil mastermind plot. Hanlon's razor applies.
Implying that MS would crater an org like GitHub within a few years is ascribing fundamental incompetence to a provably competent acquirer. Hanlon's razor can't be applied when there is contradicting evidence.
GitHub’s mandated transfer to Azure was on hold for quite a while, Azure needed to be upgraded to handle them. Then the migration started, a multi-year effort, then the LLMs got integrated.
If you slap a 2.5 year window onto those phases you end up around a decade. That’s Enterprise timing, not slow motion.
And, critically, this is not some new emergent phenomenon today. Orgs are pulling the plug noticeably now because of massive service failures. The reputational damage has been growing over time.
Management changed, dumbish things happened, and consequences are rolling out. Even if MS didn't push the direct changes causing the issues (they did), they are responsible for avoiding and alleviating them (they didn't). Change propagation takes time, bound by the medium of transmission.
I feel like I’m out of the loop, or maybe I’m just not a super GitHub power user, but GitHub does pretty much what I expect and I haven’t had issues with it. All my git commands for GitHub just work and PRs and code reviews are the same as it’s always been.
Can someone explain what exactly is so bad now that leaving it entirely to use some new platform, even spinning up your own servers, is a reasonable alternative?
> which unsurprisingly reports a lower number than what Github themselves claim
Yes, because it throws all partial outages into one bucket, which is a dumb idea: the bigger a platform becomes, with more loosely coupled components, the more unattainable high uptime numbers become.
Looking through the incidents, a good portion of them are regarding Copilot and Codespaces, two products I couldn't care less about. I do also have my regular run-ins with Github outages, but that website is just hyperbolic.
It's more if you use it for things beyond traditional dev work. GitHub Actions has become very unstable, and anyone using it at the level where people are downloading, filing issues, and pushing code up 24/7 would feel the pain of every outage, not just those that happen during one's own working hours.
> and PRs and code reviews are the same as it’s always been
When they changed the PR view to no longer display all the changes at once was the moment I said "I really need to find something else". Not only is the platform very unreliable (at least from Spain), but most product changes they make render the platform less efficient for me as a developer to use.
> Can someone explain what exactly is so bad now that leaving it entirely to use some new platform, even spinning up your own servers, is a reasonable alternative?
It always was, but the network effect of GitHub has been large. Seemingly not infinite, though: at some point people start favoring "being able to access the platform" over "people can star my repository", it seems.
When using it every day (and especially when using Github Actions), there's something broken or half-broken nearly every day.
Most random errors in Github Actions (e.g. jobs just randomly failing or getting stuck and requiring a manual restart, or just being plain slow) also never show up on the Github Status page. The Github Actions VMs are also so slow that I'm seriously pondering setting up a cheap throw-away laptop at home as runner, that would easily be 10x faster. But then we're at playing IT admin at home :/
I’m a relatively casual user of GitHub and even I’ve run into availability issues when pushing up changes. Your comment makes it sound like you don’t use GitHub much at all or maybe are in some time zone or AZ that’s somehow insulated.
Something reliably breaks at 7 am PST (sometimes earlier) if you're using anything more than the git command line, and sometimes (not too often, true) even the git protocol breaks.
Let's be honest: there's an order of magnitude (or more) higher throughput of PR jitter and new-repo bloat, which makes this look like a viral digital native at scale. Couple that with being owned by one of the most scale-immature companies on the planet... of course it's a problem.
Get these folks off Azure and Cosmos DB (or whatever MSFT forces them to use) to something real and maybe you'd have a shot
It isn't surprising at all; Microsoft is running a PE-firm playbook with what it buys. You don't need to look much further: consider its biggest acquisition to date, Activision Blizzard.
Blizzcon canceled. All of its IP barely got any love.
See what players think about the latest World of Warcraft patch. It's absolutely shit and broken. People say the entire QA department was fired a few years back, and the quality has just gone down since.
They buy those businesses because they have nothing better to do with that free cash flow, and for accounting reasons it makes sense to hold them. They didn't buy those businesses to develop them further and make them worth more.
Github will just become ever more irrelevant.
The key issue is that the US government lets those huge monopolies exist, then use their money to buy other businesses and enshittify them.
Unless that changes in the US, this will continue happening.
The Steam userbase would appear to disagree, with recent reviews being mostly negative (and the user reviews for Overwatch have hovered between mixed and negative for years now). And this doesn't appear to be review bombing by some specific subset of players: the language breakdown shows reviews ranging from mixed to negative across all major language groups (English, Russian, Chinese, etc.).
> See what players think about the latest World of Warcraft patch. It's absolutely shit and broken
Crazy to me that the loot tables are still broken for some players/characters; they've tried to fix it several times now, and it's still not working. Since (some) endgame gear can only be obtained this way, they've effectively soft-locked those players/characters out of the endgame.
Context: some players always receive the same drops (e.g. a belt), rather than a varied loot table that gives them a chance to get items they need.
If you are at the limits of your architecture you may need to rewrite things, and if you are rewriting things you cannot arbitrarily speed that up by throwing dollars at it.
Anyone who has been part of that journey knows how painful it really is. A lot of times the systems fail at all levels, and you have to redesign from first principles.
I have, but it depends what you mean.
Scenario 1: e-commerce SaaS (think: Amazon but whitelabel, and before CPUs even had AES instructions); Christmas was "fun".
Scenario 2: Video Games. The first day is the worst day when it comes to scale. Everything has to be flawless from day 0 and you get no warning as to what can go wrong.
Yet, somehow, I managed to make highly reliable systems.
In scenario 1, I had an existing system that had to scale up and down with load. This was before the cloud existed and hardware had a 3-4 month lead time, so most of the effort went into optimising existing code, increasing job timeouts, and "quenching" sources that were expensive. We also used to do some 'magic' when it came to serving requests that had a session token or shopping-cart cookie.
In scenario 2, we had a clean-room implementation and no legacy, which is a blessing but also a curse: there's no possibility of sampling real usage, but you also don't need to worry about making breaking changes that are for the better. With legacy you have to figure out how to migrate to the new behaviour gradually.
So, pros and cons... but it's not like handling huge load hasn't been done before. Computers are faster than they have ever been, and while my personal opinion is that operational knowledge is dying (due to general disdain for the people who actually used to run systems at scale, rather than just write hopeful "eventually consistent" YAML that they call deterministic), the systems that exist today hold your hand much better than they did for me 20 years ago.
And I ran 1% of web traffic with an ops team of 5 back then. So, idk what's going on here.
EDIT: Likely people are flagging me because I sound arrogant (or I hurt their feelings by talking bad about YAML-ops), but all I am doing is answering the question presented based on my experience.
[1] https://github.blog/news-insights/company-news/an-update-on-...
Large increase, but nothing existential.
GitHub would have obligations to MS investors to make accurate projections just like Microsoft itself, right?
And that starts by laying off your best engineers, I guess
[1] https://bloomberry.com/data/github/
I simply do not care.
Customers pay for a service. If they don't get what they paid for, it's perfectly reasonable and normal to go elsewhere.
Why do people on HN keep apologizing on the behalf of trillion-dollar companies?
Self hosted is probably the way to go, but hardware prices are insane currently.
"Someone" can cancel the migration. "Someone" just won't.
As you say it's limited, but that can be both good and bad.
https://news.ycombinator.com/item?id=47616242 https://isolveproblems.substack.com/p/how-microsoft-vaporize...
Yeah, tried that first, as I didn't want to move to Forgejo, I just wanted to keep working when I wanted to work.
The GitHub runner on Linux seemed fine, but the ones for macOS and Windows seemingly did something that made them a hell of a lot slower than even running VMs and then executing stuff inside those. I'm not sure what the runner is doing, if there is some built-in sandboxing or what not for those platforms, but it wasn't feasible to rely on for me as the builds took way too long.
- SourceHut: https://sr.ht/~sircmpwn/sourcehut/
- Forgejo (used by Codeberg, etc.): https://forgejo.org/
Forgejo, on the other hand, is a drop-in replacement for GitHub.
At a customer we're implementing GitHub Actions and even on our Dev environment there are so many hiccups with GitHub.
Might be pricy though.
What made it better than e.g GitLab?
https://www.youtube.com/watch?v=Js_Y_q-IkYo
MSFT should just create slophub.com they'd make money im sure.
As a private person I use it too as a free hoster, but from work I mainly know self hosted instances of jenkins and TeamCity.
When one firm is so dominant for so long, the question is more like “Why shouldn’t we just use GitHub like 80% of software companies do?”
The issues they’ve had are almost all very recent. Very few companies have reevaluated that decision, because moving a big and well-integrated part of infrastructure is a huge project that delivers no value to the business. Speculating that you’ll have fewer development-slowing outages is not the most convincing when asking for the budget to do this. Plus, self-hosted isn’t necessarily going to have better uptime - mistakes happen.
I think before Actions, it would have been a lot easier to migrate off GH though. You’d just need to change a lot of repo URLs and find a way to set up webhooks from the new place to poke CI. Now with Actions, a lot lives in GH and in a proprietary flavor that doesn’t just ‘lift and shift.’
Maybe, but I've never heard of any company using GitHub for internal projects in my real life. For me it was always the go-to for open-source projects.
Then again it's not a topic that often comes up in my developer circles.
It has been bad for at least 18mo, maybe longer? I recall multiple work impacting outages at my previous employer extending back into 2024. Maybe even earlier than that?
I guess, but it's not like you can't learn how to create a pull request on Bitbucket, or an issue in Jira, within a work day?
That seems like the smallest thing when switching to a new company.
> The friction for adopting features like Actions is relatively low.
Yeah, I know almost nothing about the CI integration and actions when it comes to Github. Will look into it. Thank you.
Mostly boils down to marketing and easier to establish a community. Almost every developer has an account there, leading to network effects being much larger, so if you're a new FOSS project, finding contributors and getting your project in front of other's eyes is much easier when you're on GitHub compared to your own Forgejo instance.
With that said, I'd question if chasing "most external one-time contributors" or GitHub stars is the right way to actually run a FOSS project, personally I'd avoid thinking about those vanity-numbers as much as possible and focus on the project, code and contributors themselves.
But, I've literally heard those two arguments for "why GitHub" countless times over the years.
But closed source companies surely don't need to establish a community?
I've tried so many times in the past to argue for a self-hosted setup that you fully control, if you can afford it. Things just get so much smoother, and if you're a software development company, you probably want to own the software development workflow end-to-end so you can actually ship as fast as you want.
I know developers love to build, but do you think:
1) self-hosting git provides any competitive edge to the business over letting someone manage it?
2) it provides so much value that you’re willing to fund engineers to build, secure, support this on an ongoing basis?
I’ve found the answer to those is No in both cases.
The same reason you wouldn’t build your own internal chat tool, you’d use Slack. And you wouldn’t bother self-hosting your own Jira or documentation.
Code hosting is code hosting; there's no difference where it's hosted. There's no slowdown in delivery from using GitHub - their March uptime was 99.5%, which annoys some commenters but is fine. That's roughly 3.6 hours of downtime per month, which is tolerable.
You would spend way more effort and money building a janky self-hosted solution to end up with a worse result.
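For what it's worth, the uptime arithmetic is easy to sanity-check. A quick sketch (plain Python; the figures are generic SLA math, not GitHub's actual numbers):

```python
def downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime implied by an uptime percentage over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(downtime_minutes(99.5))  # ~216 minutes, i.e. ~3.6 hours per month
print(downtime_minutes(99.9))  # ~43 minutes per month
```

So 99.5% over a month is measured in hours, not minutes; "three nines" and up is where monthly downtime drops below an hour.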
I guess it’s all the other non-git stuff like issue tracking and other (unfortunately) centralized products on GitHub that causes disruption when they go down.
Weird how GitHub built itself around a distributed VC system and then made all its other services centralized.
Yes, you want to run automated builds, unit test, end to end test, UI tests, make it easy for testers to deploy specific versions / tags to internal server. Also kick off builds for iOS on mac computers. We use Teamcity for that.
Tracking of issues, feature and epics. Maybe also knowledge base / wiki. We use Jira.
And pull requests. Bitbucket.
How do you find out what "github user #" you are?
Or look at the html source of your avatar, https://avatars.githubusercontent.com/u/2851?v=4
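If you'd rather not dig through HTML, GitHub's public REST API exposes the same number: `GET https://api.github.com/users/<username>` returns a JSON object whose `id` field is the numeric account id. A minimal sketch (the sample payload in the test mirrors the `u/2851` from the avatar URL; the username is whatever you pass in):

```python
import json
from urllib.request import urlopen

def extract_user_id(payload: bytes) -> int:
    """Pull the numeric account id out of a /users/<name> API response body."""
    return json.loads(payload)["id"]

def github_user_id(username: str) -> int:
    # Fetch the public profile and return its numeric id
    # (ids are assigned sequentially, so lower means earlier signup).
    with urlopen(f"https://api.github.com/users/{username}") as resp:
        return extract_user_id(resp.read())
```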
I, too, am a fairly, but not immediate early user of GitHub. Despite GitHub’s poor metrics, I am still shipping, because writing software doesn’t require GitHub.
Hashimoto’s comments sound disturbed and I hope he finds some peace, but if he wasn’t who he was and you read these comments, you’d think this person had a problem. So, I think he does.
If your workflow doesn't need the features that have had reliability problems over recent times (which includes some of the basic collaborative features), is GitHub even the right tool for your task? If not, then your judgement of others for complaining about the issues is presumptuous to the point of being somewhat obnoxious.
The post being replied to essentially said “I don't use the features that have been regularly broken in recent times, or where the features I do use [core git] were broken it luckily didn't affect me, so anyone thinking of leaving has a mental problem”.
Or to paraphrase the old joke:
Q: How many programmers does it take to change a lightbulb?
A: The sunlight through the windows here is working fine, if you can't see where you are that must be a “you” problem.
You don't often see the completely unhinged ad hominem 'faux mental health concern' segue here on Hacker News to try and paint someone as 'disturbed'. Thought that was mainly a Reddit thing people do.
Honestly, I thought you were demeaning him to defend GitHub. But after reading the article, it does seem like his emotional reaction is not aligned with the situation. Just saying it openly for others who get the same impression I did.
That said, GitHub can be a full time job for many (handling and responding to issues, reviews, and so on, depending on the size of the project). It's also not unheard of to have PR descriptions and comments serve as documentation rather than commit messages. So GitHub's unavailability is certainly extremely disruptive to many companies.
> “Some people doom scroll social media. I've been doom scrolling GitHub issues since before that was a word,” he admitted. “On vacations I'd have bookmarks of different projects on GitHub I wanted to study. Not just source code, but OSS processes, how other maintainers react to difficult situations. Etc. Believe it or not, I like this.”
> “I've been angry about it. I've hurt people's feelings. I've been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal,” he wrote.
He is a passionate person whose identity is heavily invested in community and technical achievement. He's upset about his world being disrupted, not that GitHub as a product is failing him. This is how high-performing people are - they care deeply about their work. Could you imagine leveling these charges at a visual artist when they complain about a company messing with their favorite pencil? Or a saxophonist when their reed of choice is discontinued? It's petty and reductionist.
If that person is lashing out and hurting people around him as a consequence, then yeah, I'd say that's not healthy. Unless he has a smaller barrier than I do for what he considers lashing out, and just refers to online complaints.
GitHub’s downtime is a problem for issue tracking, PR merging, contributing and reviewing PRs, and more.
Your exact point was already pre-addressed in the blog post because it’s so predictable that some would completely miss the point. GitHub’s downtime isn’t about stopping you from writing code on your own machine.
> Hashimoto’s comments sound disturbed and I hope he finds some peace, but if he wasn’t who he was and you read these comments, you’d think this person had a problem. So, I think he does.
What a gross ad hominem about someone’s mental health. Please don’t do this.
I don't expect everybody and their nan to leave GitHub by next wednesday and spin up their own Forgejo server, but I do think GitHub should be worried that people are finally looking to move away from them.
For the people who want to do their work and get on with their life, it does not really matter that much.
Gosh. We're going from software to slopware.
https://gitprotect.io/blog/devops-threats-unwrapped-mid-year...
That's from 2025 but it continued getting even worse this year.
https://github.com/SerJaimeLannister/who-left-gh/
Currently I know of 3 projects (Ghostty, Bookstack-app, Hardenedbsd) that seem to have moved away from GitHub, from my understanding.
https://ziglang.org/news/migrating-from-github-to-codeberg/
If I were to bet, there's probably a product manager or other leader who's just gung-ho on new features and losing track of who their customers are and what their needs are.
They don't/didn't care because what are you going to do?
On one hand, the free users shouldn't complain too much, though I get their anger. But the place I work is an enterprise paying customer and this is bullshit.
But the vibe coding BS probably made it 10 times worse.
The acquisition was 8 years ago.
Yup, I keep seeing this in various companies. Teams that were effective and did solid engineering are now more effective and do even better engineering. Teams that were effectively already just "boilerplate monkeys" now produce a lot more code than before, but the quality is the same, so effectively they're worse at contributing now than before, and take more shortcuts, not fewer.
From my point of view, agents are amplifiers, so if you usually build spaghetti projects, agents just help you do that faster, not avoid the spaghetti altogether. If you usually build well-designed stuff, they can help you put that together faster.
Add in them being extremely high scale and critical infrastructure and it's easy to see where things can go wrong, vibe added code or not. I think we'd all prefer they have long slow roadmaps but clearly leadership thinks they're in a fight with the other AI companies to release the newest and bestest every day.
Embrace, extend, and extinguish.
What exactly are they extinguishing GitHub to the benefit of? Azure Repos?
The original red dog team that started azure is long gone and the general success of the cloud papers over all levels of incompetence so that the incompetence is now entrenched and unable to do better.
Cloud service providers have this unfortunate property where poor designs will make more money which makes it hard to maintain a culture of excellence. I tried to push a design change that would result in a 10x throughput for a certain product and was told that a 90% drop in usage is the last thing they want. I self host my own stuff with GitLab, so far not a single unplanned outage in 6 years.
Perhaps a Roman decimation is in order, whenever GitHub experiences an outage fire one GitHub employee at random. That should help get interests in line and allow for cross org cooperation. With 150 outages per year and a staff of 6,000 that amounts to 2.5% per year if no improvements are made.
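The 2.5% figure is just the outage count divided by headcount; spelled out to make the (tongue-in-cheek) arithmetic explicit:

```python
outages_per_year = 150
staff = 6000

# One random firing per outage => expected annual attrition rate
attrition = outages_per_year / staff
print(f"{attrition:.1%}")  # prints "2.5%"
```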
As a place to run tests? Build your own infrastructure. It's easier than ever. Why rely on black boxes to do that?
I'm not saying this is horrible advice, but I think it conveniently ignores some major reasons people prefer cloud infrastructure in the first place.
Building your own infrastructure is the (relatively) easy part. Maintaining it, ensuring everything is patched, passing compliance audits, dealing with your own outages (I find it a bit ironic when everyone complains about cloud downtime, as if self hosted infrastructure has 99.999% uptime) is the expensive part. I'm not saying it's that hard to do, but once you get to a certain size it requires dedicated staff to manage, which is expensive.
In fact, if GitHub Actions were more reliable, I would hardly see any reason at all to host your own test infrastructure for most companies. The only reason hosting your own is more attractive is because GH Actions has such poor uptime.
27th April 2026: Starting June 1, your Copilot usage will consume GitHub AI Credits.
28th April 2026: An update on GitHub availability
28th April 2026: Securing the git push pipeline bla bla bla critical remote code vulnerability bla bla bla
In particular, the GitHub availability post is interesting. When I read it, it almost sounded like a plea: "believe in us still, guys!!!". If you then read what the Ghostty author wrote, something no longer matches between GitHub's blog post and the outside world. GitHub is like the Titanic: they see the part of the iceberg above water and say "nothing to see here, this ship is invincible" (aka AI is invincible). Meanwhile everyone else is already jumping off the ship...
Such a one-punch sentence that distills the message with a little bit of dramatic flair.
got damn, anyone got recommendations on how to write like a journalist ?
The rest of the article can be AI generated, don't fret about it.
It’s an intellectual and creative exercise that I kind of enjoy.
- forgejo [1] is working on ForgeFed [2], an extension of ActivityPub (the protocol made popular by Mastodon)
- tangled is built on top of ATproto (the protocol behind Bluesky) [3]
- radicle is rolling their own protocol, more peer-to-peer than federated [4]
- fossil is a broader all-in-one solution: not only a new Version Control System (a replacement for git), but also a forge (has the features of a forge: issues (bug-tracking), PRs, comments, wikis, ...) [5]
The other self-hosted forges such as GitLab, SourceHut, and Gitea don't have such a high level of decentralization and resilience. That doesn't make them less good; they are solving different problems, mainly being an easy-to-use self-hosted alternative to proprietary forges. For instance, Gitea has Gitea Actions, which is designed to be compatible with GitHub Actions [6], while I don't think running CI/CD workflows in a decentralized way will be the priority of projects like tangled or radicle.
[1] https://forgejo.org/faq/#is-there-a-roadmap-for-forgejo
[2] https://forgefed.org/
[3] https://tangled.org/
[4] https://radicle.dev/guides/protocol#federation-vs-peer-to-pe...
[5] https://fossil-scm.org/home/doc/trunk/www/index.wiki
[6] https://docs.gitea.com/usage/actions/overview
I used to run a git server for all my main projects, and mirrored public ones on GitHub. Then the convenience of GitHub lured me in, to the point I shut down my private git server 5 years ago.
Now I kinda regret that decision.
the fake surprise is so fake
They're part of a trillion(!!!)-dollar company and want to be an essential part of SW dev workflow. They ought to act like it.
So much so that they stopped posting uptime metrics for a while on their status page and an independent 3rd party created a website just for this:
https://mrshu.github.io/github-statuses/ (not my website)
According to that website, which unsurprisingly reports a lower number than what Github themselves claim, Github uptime is down to ~86%.
And if you work in the space, you know how terrible that is, but even more so for such a critical piece of infrastructure.
And the most recent bug is "we added random code to some PRs and some PRs became invisible", which is a WTF bug that should be impossible to exist.
https://blizzcon.com/en-us/event/
I think Diablo Immortal was likely the biggest success Blizzard delivered there.