This seems like a tragedy of the commons -- GitHub is free after all, and it has all of these great properties, so why not? -- but this kind of decision making occurs whenever externalities are present.
My favorite hill to die on (externality) is user time. Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time. Yet if I spent one hour making my app one second faster for my million users, I can save 277 user hours per year. But since user hours are an externality, such optimization never gets done.
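For concreteness, the arithmetic behind that number (assuming each of the million users hits the saved second once a year):

```python
users = 1_000_000
seconds_saved_per_user = 1
hours_saved = users * seconds_saved_per_user / 3600
print(hours_saved)  # ~277.8 user-hours saved per year
```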
Externalities lead to users downloading extra gigabytes of data (wasted time) and waiting for software, all of which is waste that the developer isn't responsible for and doesn't care about.
> Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time.
I don’t know what you mean by software houses, but every consumer-facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric.
This has been common wisdom for decades. I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.
That's not how it works. The demand for engineering hours is an order of magnitude higher than the supply for any given game, so you have to pick and choose your battles because there's always much, much more to do. It's not bizarre that nobody verified texture storage was being done in an optimal way at launch, without sacrificing load times at the altar of visual fidelity, particularly given the state the rest of the game was in. Who the hell has time to do that when crashes abound and the network stack has to be rewritten at a moment's notice?
Gamedev is very different from other domains, being in the 90th percentile for complexity and codebase size, and the 99th percentile for structural instability. It's a foregone conclusion that you will rewrite huge chunks of your massive codebase many, many times within a single year to accommodate changing design choices, or if you're lucky, to improve an abstraction. Not every team gets so lucky on every project. Launch deadlines are hit when there's a huge backlog of additional stuff to do, sitting atop a mountain of cut features.
> It's not bizarre that nobody verified texture storage was being done in an optimal way at launch
The inverse, however, is bizarre. That they spent potentially quite a bit of engineering effort implementing the (extremely non-optimal) system that duplicates all the assets half a dozen times to potentially save precious seconds on spinning rust - all without validating it was worth implementing in the first place.
They talk about it being an optimization. They also talk about the bottleneck being level generation, which happens at the same time as loading from disk.
I worked in e-commerce SaaS in 2011~ and this was true then but I find it less true these days.
Are you sure that you’re not the driving force behind those metrics; or that you’re not self-selecting for like-minded individuals?
I find it really difficult to convince myself that even large players (Discord) are measuring startup time. Every time I start the thing I’m greeted by a 25s wait and a `RAND()%9` number of updates that each take about 5-10s.
As a discord user, it's the kind of platform that I would want to have running to receive notifications, sort of like the SMS of gaming.
A large part of my friend group uses discord as the primary method of communication, even in an in-person context (I was at a festival a few months ago with a friend, and we would send texts over discord if we got split up), so maybe it's not a common use case.
I have the same experience on Windows. On the other hand, starting up discord on my cachyos install is virtually instant. So maybe there is also a difference between the platform the developers use and the one their users use.
On the contrary, every consumer-facing product I've worked on had no performance metrics tracked. And for enterprise software it was even worse, as the end user is not the one who makes the decision to buy and use the software.
>>what you mean by software houses
How about Microsoft? Start menu is a slow electron app.
> I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.
This is true for sites that are trying to make sales. You can quantify how much a delay affects closing a sale.
For other apps, it’s less clear. During its high-growth years, MS Office had an abysmally long startup time.
Maybe this was due to MS having a locked-in base of enterprise users. But given that OpenOffice and LibreOffice effectively duplicated long startup times, I don’t think it’s just that.
You also see the Adobe suite (and also tools like GIMP) with some excruciatingly long startup times.
I think it’s very likely that startup times of office apps have very little impact on whether users will buy the software.
The issue here is not tracking, but developing. Like, how do you explain the fact that whole classes of software have gotten worse on those "key metrics"? (and that includes web-selling webpages)
I wouldn't call it tragedy of the commons, because it's not a commons. It's owned by microsoft. They're calculating that it's worth it for them, so I say take as much as you can.
Commons would be if it's owned by nobody and everyone benefits from its existence.
> so I say take as much as you can. Commons would be if it’s owned by nobody
This isn’t what “commons” means in the term ‘tragedy of the commons’, and the obvious end result of your suggestion to take as much as you can is to cause the loss of access.
Anything that is free to use is a commons, regardless of ownership, and when some people use too much, everyone loses access.
Finite digital resources like bandwidth and database sizes within companies are even listed as examples in the Wikipedia article on Tragedy of the Commons. https://en.wikipedia.org/wiki/Tragedy_of_the_commons
No, the word and its meaning both point to the fact that there’s no exclusive ownership of a commons. This is important, since ownership is associated with bearing the cost of usage (i.e., depreciation), which would lead an owner to avoid the tragedy of the commons. Ownership is regularly the solution to the tragedy (socialism didn’t work).
The behavior that you warn against is that of a free rider that makes use of a positive externality of GitHub’s offering.
That is one meaning of “commons”, but not all of them, and you might be mistaking which one the phrase ‘tragedy of the commons’ is using.
“Commons can also be defined as a social practice of governing a resource not by state or market but by a community of users that self-governs the resource through institutions that it creates.”
The actual mechanism by which ownership resolves tragedy of the commons scenarios is by making the resource non-free, by either charging, regulating, or limiting access. The effect still occurs when something is owned but free, and its name is still ‘tragedy of the commons’, even when the resource in question is owned by private interests.
Ownership, I guess. The 2 parent comments are claiming that “tragedy of the commons” doesn’t apply to privately owned things. I’m suggesting that it does.
Edit: oh, I do see what you mean, and yes I misunderstood the quote I pulled from WP - it’s talking about non-ownership. I could pick a better example, but I think that’s distracting from the fact that ‘tragedy of the commons’ is a term that today doesn’t depend on the definition of the word ‘commons’. It’s my mistake to have gotten into any debate about what “commons” means, I’m only saying today’s usage and meaning of the phrase doesn’t depend on that definition, it’s a broader economic concept.
Still, because reality doesn't respect boundaries of human-made categories, and because people never define their categories exhaustively, we can safely assume that something almost-but-not-quite like a commons, is subject to an almost-but-not-quite tragedy of the commons.
That seems to assume some sort of… maybe unfounded linearity or something? I mean, I’m not sure I agree that GitHub is nearly a commons in any sense, but let’s put that aside as a distraction…
The idea of the tragedy of the commons relies on this feedback loop of having these unsustainably growing herds (growing because they can exploit the zero-cost-to-them resources of the commons). Feedback loops are notoriously sensitive to small parameter changes. MS could presumably impose some damping if they wanted.
> That seems to assume some sort of… maybe unfounded linearity or something
Not linearity but continuity, which I think is a well-founded assumption, given that it's our categorization that simplifies the world by drawing sharp boundaries where no such bounds exist in nature.
> The idea of the tragedy of the commons relies on this feedback loop of having these unsustainably growing herds (growing because they can exploit the zero-cost-to-them resources of the commons)
AIUI, zero-cost is not a necessary condition, a positive return is enough. Fishermen still need to buy fuel and nets and pay off loans for the boats, but as long as their expected profit is greater than that, they'll still overfish and deplete the pond, unless stronger external feedback is introduced.
Given that the solution to tragedy of the commons is having the commons owned by someone who can boss the users around, GitHub being owned by MS makes it more of a commons in practice, not less.
And indeed MS/GitHub does impose some "damping" in the form of things like API request throttling, CPU limits on CI, asking Homebrew not to use shallow cloning, etc. And those limits are one of the reasons given why using git as a database isn't good.
There is an analogy in the sense that for the users a resource is, for certain practical intents and purposes, functionally common. Social media is like this as well.
But I would make the following clarifications:
1. A private entity is still the steward of the resource and therefore the resource figures into the aims, goals, and constraints of the private entity.
2. The common good is itself under the stewardship of the state, as its function is guardian of the common good.
3. The common good is the default (by natural law) and prior to the private good. The latter is instituted in positive law for the sake of the former by, e.g., reducing conflict over goods.
> There is an analogy in the sense that for the users a resource is, for certain practical intents and purposes, functionally common. Social media is like this as well.
I think it's both simpler and deeper than that.
Governments and corporations don't exist in nature. Those are just human constructs, mutually-recursive shared beliefs that emulate agents following some rules, as long as you don't think too hard about this.
"Tragedy of the commons" is a general coordination problem. The name itself might've been coined with some specific scenarios in mind, but for the phenomenon itself, it doesn't matter what kind of entities exploit the "commons"; the "private" vs. "public" distinction itself is neither a sharp divide, nor does it exist in nature. All that matters is that there's some resource used by several independent parties, and each of them finds it more beneficial to defect than to cooperate.
In a way, it's basically a 3+-player prisoner's dilemma. The solution is the same, too: introducing a party that forces all other parties to cooperate. That can be a private or public or any other kind of org taking ownership of the commons and enforcing quotas, or in the case of prisoners, a mob boss ready to shoot anyone who defects.
The whole notion of the "tragedy of the commons" needs to be put to rest. It's an armchair thought experiment that was disproven at the latest in the 90s by Elinor Ostrom with actual empirical evidence of commons.
The "tragedy", if you absolutely need to find one, is only for unrestricted, free-for-all commons, which is obviously a bad idea.
A high-trust community like a village can prevent a tragedy of the commons scenario. Participants feel obligations to the community, and misusing the commons actually does have real downsides for the individual because there are social feedback mechanisms. The classic examples like people grazing sheep or cutting wood are bad examples that don't really work.
But that doesn't mean the tragedy of the commons can't happen in other scenarios. If we define commons a bit more generously it does happen very frequently on the internet. It's also not difficult to find cases of it happening in larger cities, or in environments where cutthroat behavior has been normalized
> A high-trust community like a village can prevent a tragedy of the commons scenario. Participants feel obligations to the community, and misusing the commons actually does have real downsides for the individual because there are social feedback mechanisms.
That works while the size of the community is ~100-200 people, when everyone knows everyone else personally. It breaks down rapidly after that. We compensate for that with hierarchies of governance, which give rise to written laws and bureaucracy.
New tribes break off old tribes, form alliances, which form larger alliances, and eventually you end up with countries and counties and voivodeships and cities and districts and villages, in hierarchies that gain a level per ~100x population increase.
This is sociopolitical history of the world in a nutshell.
"and eventually you end up with countries and counties and vovoidships and cities and districts and villages, in hierarchies that gain a level per ~100x population increase."
You say it like this is a law set in stone, because this is what happened in history, but I would argue it happened under different conditions.
The main advantage of an empire over small villages/tribes is not at all that they have more power than the villages combined, but that they can concentrate their power where it is needed. One village did not stand a chance against the empire - and the villages were not coordinated enough.
But today we would have the internet for better communication and coordination, enabling the small entities to coordinate a defense.
Well, in theory of course. Because we do not really have autonomous small states, but are dominated by the big players. And the small states mostly have the choice of which bloc to align with, or get crushed. But the trend might go towards small again.
(See also cheap drones destroying expensive tanks, battleships etc.)
The internet is working exactly the opposite way to what you're describing - it's making everything more centralized.
Once we had several big media companies in each country and in each big city. Now we have Google and Facebook and TikTok and Twitter and then the "whatevers".
Yes, but there is a difference between having the choice of joining FB or not having a choice at all when the empire comes to claim you (like in Ukraine).
> That works while the size of the community is ~100-200 people,
Yet we regularly observe this working with millions of people; we take care of our young, we organize, and when we see that some action hurts our environment we tend to limit its use.
It's not obvious why some societies break down early and some go on working.
> Yet we regularly observe this working with millions of people; we take care of our young, we organize, and when we see that some action hurts our environment we tend to limit its use.
That's more like human universals. These behaviors generally manifest to smaller or larger degree, depending on how secure people feel. But those are extremely local behaviors. And in fact, one of them is exactly the thing I'm talking about:
> we organize
We organize. We organize for many reasons, "general living" is the main one but we're mostly born into it today (few got the chance to be among the founding people of a new village, city or country). But the same patterns show up in every other organization people create, from companies to charities, from political interest groups to rural housewives' circles -- groups that grow past ~100 people split up. Sometimes into independent groups, sometimes into levels of hierarchies. Observe how companies have regional HQs and departments and areas and teams; religious groups have circuits and congregations, etc. Independent organizations end up creating joint ventures and partnerships, or merge together (and immediately split into a more complex internal structure).
The key factor here is, IMO, for everyone in a given group to be in regular contact with everyone else. Humans are well evolved for living in such small groups - we come with built-in hardware and software to navigate complex interpersonal situations. Alignment around shared goals and implicit rules is natural at this scale. There's no space for cheaters and free-loaders to thrive, because everyone knows everyone else - including the cheater and their victims. However, once the group crosses this "we're all a big family, in it together" size, coordinating everyone becomes hard, and free-loaders proliferate. That's where explicit laws come into play.
This pattern repeats daily, in organizations people create even today.
I get the feeling it's the combination of Schelling points and surplus. If everyone else is being pro-social, i.e. there is a culture of it, and people aren't so hard up that they can't reasonably afford to do the same, then that's what happens, either by itself (Hofstadter's theory of superrationality) or via nothing more than light social pressure.
But if a significant fraction of the population is barely scraping by then they're not willing to be "good" if it means not making ends meet, and when other people see widespread defection, they start to feel like they're the only one holding up their end of the deal and then the whole thing collapses.
This is why the tendency for people to propose rent-seeking middlemen as a "solution" to the tragedy of the commons is such a diabolical scourge. It extracts the surplus that would allow things to work more efficiently in their absence.
I’ve heard stories from communist villages where everyone knew everyone. Communal parks and property were not respected and frequently vandalized or otherwise neglected because they didn’t have an owner and were treated as something for someone else to solve.
It’s easier to explain in those terms than assumptions about how things work in a tribe.
> A high-trust community like a village can prevent a tragedy of the commons scenario.
No it does not. This sentiment, which many people have, is based on a fictional and idealistic notion of what small communities are like, held by people who have never lived in such communities.
Empirically, even in high-trust small villages and hamlets where everyone knows everyone, the same incentives exist and the same outcomes happen. Every single time. I lived in several and I can't think of a counter-example. People are highly adaptive to these situations and their basic nature doesn't change because of them.
Even here, the state is the steward of the common good. It is a mistaken notion that the state only exists because people are bad. Even if people were perfectly conscientious and concerned about the common good, you still need a steward. It simply wouldn’t be a steward who would need to use aggressive means to protect the common good from malice or abuse.
Ostrom showed that it wasn't necessarily a tragedy, if the tight groups involved decided to cooperate. This is common in what we call "trust-based societies", which aren't universal.
Nonetheless, the concept is still alive, and anthropogenic global warming is here to remind you about this.
She did not “disprove” the existence of the tragedy of the commons. What she established was that controlling the commons can be done communally rather than through privatization or through government ownership.
Communal management of a resource is still government, though. It just isn’t central government.
The thesis of the tragedy of the commons is that an uncontrolled resource will be abused. The answer is governance at some level, whether individual, collective, or government ownership.
> The "tragedy", if you absolutely need to find one, is only for unrestricted, free-for-all commons, which is obviously a bad idea.
Right. And that’s what people are usually talking about when they say “tragedy of the commons”.
People invoke the tragedy of the commons in bad faith to argue for privatization because “the alternative is communism”. i.e. Either an individual or the government has to own the resource.
This is of course a false dichotomy because governance can be done at any level.
It also seems to omit the possibility that the thing could be privately operated but not for profit.
Let's Encrypt is a solid example of something you could reasonably model as "tragedy of the commons" (who is going to maintain all this certificate verification and issuance infrastructure?) but then it turns out the value of having it is a million times more than the cost of operating it, so it's quite sustainable given a modicum of donations.
Free software licenses are another example in this category. Software frequently has a much higher value than development cost and incremental improvements decentralize well, so a license that lets you use it for free but requires you to contribute back improvements tends to work well because then people see something that would work for them except for this one thing, and it's cheaper to add that themselves or pay someone to than to pay someone who has to develop the whole thing from scratch.
It has the same effect though. A few bad actors using this “free” thing can end up driving the cost up enough that Microsoft will have to start charging for it.
The jerks get their free things for a while, then it goes away for everyone.
I think the jerks are the ones who bought and enshittified GitHub after it had earned significant trust and become an important part of FOSS infrastructure.
Scoping it to a local maximum, the only thing worse than git is github. In an alternate universe hg won the clone wars and we would all be better off for it.
Why do you blame MS for predictably doing what MS does, and not the people who sold that trust & FOSS infra to MS for a profit? Your blame seems misplaced.
And out of curiosity, aside from costing more for some people, what’s worse exactly? I’m not a heavy GitHub user, but I haven’t really noticed anything in the core functionality that would justify calling it enshittified.
Probably the worst thing MS did was kill GitHub’s nascent CI project and replace it with Azure DevOps. Though to be fair the fundamental flaws with that approach didn’t really become apparent for a few years. And GitHub’s feature development pace was far too slow compared to its competitors at the time. Of course GitHub used to be a lot more reliable…
Now they’re cramming in half baked AI stuff everywhere but that’s hardly a MS specific sin.
MS GitHub has been worse about DMCA and sanctioned country related takedowns than I remember pre acquisition GitHub being.
I don't blame them uniquely. I think it's a travesty the original GitHub sold out, but it's just as predictable. Giant corps will evilly make the line go up, individual regular people will have a finite amount of money for which they'll give up anything and everything.
As for how the site has become worse, plenty of others have already done a better job than I could there. Other people haven't noticed or don't care and that's ok too I guess.
Right. Microsoft could easily impose a transfer fee above a certain amount, which would allow “normal” OSS development of even popular software to happen without charge while imposing a cost on projects that try to use GitHub like a database.
> Software houses optimize for feature delivery and not user interaction time. Yet if I spent one hour making my app one second faster for my million users, I can save 277 user hours per year. But since user hours are an externality, such optimization never gets done.
Google and Amazon are famous for optimizing this. It's not an externality to them though, even 10s of ms can equal an extra sale.
That said, I don't think it's fair to add time up like that. Saving 1 second for 600 people is not the same as saving 10 minutes for 1 person. Time in small increments does not have the same value as time in large increments.
> Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time. Yet if I spent one hour making my app one second faster for my million users, I can save 277 user hours per year. But since user hours are an externality, such optimization never gets done.
This is what people mean about speed being a feature. But "user time" depends on more than the program's performance. UI design is also very important.
If you think too hard about this, you come back around to Alan Kay's quote about how people who are really serious about software should build their own hardware. Web applications, and loading pretty much anything over the network in general, are a horrible, no-good, really bad user experience, and they always will be. The only way to really respect the user is with native applications that are local-first, and if you take that really far, you build (at the very least) peripherals to make it even better.
The number of companies that have this much respect for the user is vanishingly small.
>> The number of companies that have this much respect for the user is vanishingly small.
I think companies shifted to online apps because #1 it solved the copy protection problem. FOSS apps are not in any hurry to become centralized because they don't care about that issue.
Local apps and data are a huge benefit of FOSS and I think every app website should at least mention that.
Another important reason to move to online applications is that you can change the terms of the deal at any time. This may sound more nefarious than it needs to be, it just means you do not have to commit fully to your licensing terms before the first deal is made, which is tempting for just about anyone.
Your browser is acting like a condom, in that respect (pun not intended).
Yes, there are many cases when condoms are indicative of respect between parties. But a great many people would disagree that the best, most respectful relationships involve condoms.
> Meta
Does not sell or operate respectful software. I will agree with you that it's best to run it in a browser (or similar sandbox).
Desktop operating systems really dropped the ball on protecting us from the software we run. Even mobile OSs are so-so. So the browser is the only protection we reasonably have.
Yes, amen. The more invasive and abusive software gets, the less I want it running on my machine natively. Native installed applications for me now are limited only to apps I trust, and even those need to have a reason to be native apps rather than web apps to get a place in my app drawer
You mean you’d rather run unverified scripts using a good order of magnitude more resources with a slower experience and have an entire sandboxing contraption to keep said unverified scripts from doing anything to your machine…
I know the browser is convenient, but frankly, it's been a horror show of resource usage and vulnerabilities and pathetic performance.
The #1 reason the web experience universally sucks today is because companies add an absurd amount of third-party code on their pages for tracking, advertisement, spying on you or whatever non-essential purpose. That, plus an excessive/unnecessary amount of visual decoration.
The idea that somehow those companies would respect your privacy were they running a native app is extremely naive.
We can already see this problem on video games, where copy protection became resource-heavy enough to cause performance issues.
I don't think most software houses spend enough time even focusing on engineering time. CI pipelines that take tens of minutes to over an hour, compile times that exceed ten seconds when nothing has changed, startup times that are much more than a few seconds. Focus and fast iteration are super important to writing software and it seems like a lot of orgs just kinda shrug when these long waits creep into the development process.
Let’s make a thought experiment. Suppose that I have a data format and a store that resolves the issues in the post. It is like git meets JSON meets key-value. https://github.com/gritzko/go-rdx
What is the probability of it being used? About 0%, right? Because git is proven and GitHub is free. Engineering aspects are less important.
Sorry, I am turned off by the CRDT in there. It immediately smells of overengineering to me. Not that I believe git is a better database. But why not just SQL?
I would argue LWW is the opposite of a merge. It is better to immediately know at the time of writing that there is a conflict. CRDTs either solve or (in this case) don't solve a problem that doesn't really exist, especially for package managers.
Git solves that problem and it definitely exists. Speaking of package managers, it really depends. Like, can we use one SQLite file for that? So easy, why is no one doing that?
About apps done by software houses, even though we should strive for doing a good job and I agree with the sentiment...
First argument would be - take at least two 0's off your estimate; most applications will have maybe thousands of users, and successful ones will maybe run with 10's of thousands. You might get lucky and work on an application that has 100's of thousands or millions of users, but then you work at FAANG, not a typical "software house".
Second argument is - most users use 10-20 apps in typical workday, your application is most likely irrelevant.
Third argument is - most users would save much more time by learning how to properly use the applications (or the computer) they use on a daily basis than from someone optimizing some function from 2s to 1s. But of course that's hard because they have 10-20 apps daily plus god knows how many others not on a daily basis. Though still, I see people doing super silly stuff in tools like Excel, or even not knowing copy paste - so not even any command line magic.
This was something that I heavily focused on for my feature area a year ago - new user sign up flow. But the decreased latency was really in pursuit of increased activation and conversion. At least the incentives aligned briefly.
> Externalities lead to users downloading extra gigabytes of data (wasted time) and waiting for software, all of which is waste that the developer isn't responsible for and doesn't care about.
This is perfectly sensible behavior when the developers are working for free, or when the developers are working on a project that earns their employer no revenue. This is the case for several of the projects at issue here: Nix, Homebrew, Cargo. It makes perfect sense to waste the user's time, as the user pays with nothing else, or to waste Github's bandwidth, since it's willing to give bandwidth away for free.
Where users pay for software with money, they may be more picky and not purchase software that indiscriminately wastes their time.
The user hour analogy sounds weird tho, 1s feels like 1s regardless of how many users you have. It's like the classic Asian teachers' logic of "if you come in 1 min late you are wasting N minutes for all of us in this class." It just does not stack like that.
If the class takes N minutes and one person arrives 1 minute late, and the rest of the class is waiting for them, it does stack. Every one of those students lost a minute. Far worse than one student losing one minute.
>Yet if I spent one hour making my app one second faster for my million users, I can save 277 user hours per year. But since user hours are an externality, such optimization never gets done.
I have never been convinced by this argument. The aggregate number sounds fantastic but I don't believe that any meaningful work can be done by each user saving 1 second. That 1 second (and more) can simply be taken by me trying to stretch my body out.
OTOH, if the argument is to make software smaller, I can get behind that since it will simply lead to more efficient usage of existing resources and thus reduce the environmental impact.
But we live in a capitalist world and there needs to be external pressure for change to occur. The current RAM shortage, if it lasts, might be one of them. Otherwise, we're only day dreaming for a utopia.
Time saved to increased productivity or happiness or whatever is not linear but a step function. Saving one second doesn’t help much, but there is a threshold (depending on the individual) where faster workflows lead to a better experience. It does make a difference whether a task takes a minute or half a second, at least for me.
But there isn't just one company deciding that externalizing costs onto the rest of us is a great way to boost profit since it costs them very little. Especially for a monopoly like YouTube that can decide that eating up your battery is fine if it saves them a few cents in bandwidth costs.
Not all of those externalizing companies abuse your time, but whatever they abuse can be expressed in a $ amount, and $ can be converted to a median person's time via median wage. Hell, free time is more valuable than whatever you produce during work.
Say all that boils down to companies collectively stealing 20 minutes of your time each day. 140 minutes each week. 7280 (!) minutes each year, which is 5.05 days, which makes it almost a year over the course of 70 years.
So yeah, don't do what you're doing and sweet-talk the fact that companies externalize costs (privatize the profits, socialize the losses). They're sucking your blood.
One second is long enough that it can put a user off from using your app though. Take notifications on phones for example. I know several people who would benefit from a habitual use of phone notifications, but they never stick to using them because the process of opening (or switching over to) the notification app and navigating its UI to leave a notification takes too long. Instead they write a physical sticky note, because it has a faster "startup time".
The article mentions that most of these projects did use GitHub as a central repo out of convenience so there’s that but they could also have used self-hosted repos.
I'm not sure whether this question was asked in good faith, but is actually a damn good one.
I've looked into self-hosting a git repo with horizontal scalability, and it is indeed very difficult. I don't have the time to detail it in a comment here, but for anyone who is curious it's very informative to look at how GitLab handled this with gitaly. I've also seen some clever attempts to use object storage, though I haven't seen any of those solutions put heavily to the test.
I'd love to hear from others about ideas and approaches they've heard about or tried
These days, people solve similar problems by wrapping their data in an OCI container image and distributing it through one of the container registries that do not have a practically meaningful pull rate limit. Not really a joke, unfortunately.
Even Amazon encourages this, probably not intentionally, more like as a bandaid for bad EKS config that people can do by mistake, but still - you can pull 5 terabytes from ECR for free under their free tier each month.
Explain to me how you self-host a git repo without spending any money and having no budget which is accessed millions of time a day from CI jobs pulling packages.
If people depend on remote downloads from different companies for their CI pipelines they’re doing it wrong. Every sensible company sets up a mirror or at least a cache on infra that they control. Rate limiting downloads is the natural course of action for the provider of a package registry. Once you have so many unique users that even civilized use of your infrastructure becomes too much you can probably hire a few people to build something more scalable.
> Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time.
Oh no no no. Consumer-facing companies will burn 30% of your internal team complexity budget on shipping the first "frame" of your app/website. Many people treat Next as synonymous with React, and Next's big deal was helping you do just this.
LLMs obviously can't do it all, and they still have severe areas of weakness where they can't replace humans, but there are definitely a lot of areas where they really can now. I've seen it first hand. I've even experienced it first hand. There are a couple of services that I wrote years ago that were basically parked in maintenance mode because they weren't worth investing time in, and we just dealt with some of the annoyances and bugs. With the latest LLMs, over the last couple of months I've been able to resurrect them and fix a lot of bugs and even add some wanted features in just a few hours. It really is quite incredible and scary at the same time.
Also, in case you're not aware, accusing people of shilling or astroturfing is against the Hacker News guidelines.
I’m building Cargo/UV for C. Good article. I thought about this problem very deeply.
Unfortunately, when you’re starting out, the idea of running a registry is a really tough sell. Now, on top of the very hard engineering problem of writing the code and making a world class tool, plus the social one of getting it adopted, I need to worry about funding and maintaining something that serves potentially a world of traffic? The git solution is intoxicating through this lens.
Fundamentally, the issue is the sparse checkouts mentioned by the author. You’d really like to use git to version package manifests, so that anyone with any package version can get the EXACT package they built with.
But this doesn’t work, because you need arbitrary commits. You either need a full checkout, or you need to somehow track the commit a package version is in without knowing what hash git will generate before you do it. You have to push the package update and then push a second commit recording that. Obviously infeasible, obviously a nightmare.
Conan’s solution is I think just about the only way. It trades the perfect reproduction for conditional logic in the manifest. Instead of 3.12 pointing to a commit, every 3.x points to the same manifest, and there’s just a little logic to set that specific config field added in 3.12. If the logic gets too much, they let you map version ranges to manifests for a package. So if 3.13 rewrites the entire manifest, just remap it.
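Roughly, a sketch of that version-gating idea (deliberately not Conan's real recipe API, just the pattern):

```python
# One manifest serves every 3.x release; version-specific fields are gated by
# a version check instead of each version pointing at its own commit.
def manifest_for(version: str) -> dict:
    major, minor, *_ = (int(part) for part in version.split("."))
    manifest = {"name": "foo", "version": version, "requires": ["zlib"]}
    if (major, minor) >= (3, 12):
        manifest["new_config_field"] = True  # field that only exists from 3.12 on
    return manifest
```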
I have not found another package manager that uses git as a backend that isn’t a terrible and slow tool. Conan may not be as rigorous as Nix because of this decision but it is quite pragmatic and useful. The real solution is to use a database, of course, but unless someone wants to wire me ten thousand dollars plus server costs in perpetuity, what’s a guy supposed to do?
Think about the article from a different perspective: several of the most successful and widely used package managers of all time started out using Git, and they successfully transitioned to a more efficient solution when they needed to.
Every package has its own git repository which for binary packages contains mostly only the manifest. Sources and assets, if in git, are usually in separate repos.
This seems to not have the issues in the examples given so far, which come from using "monorepos" or colocating. It also avoids the "nightmare" you mention since any references would be in separate repos.
The problematic examples either have their assets and manifests colocated, or use a monorepo approach (colocating manifests and the global index).
Before you've managed to build a popular tool, it is unlikely that you need to serve many users. Directly going for something that can serve the world is probably premature.
For most software, yes. But the value of a package manager is in its adoption. A package manager that doesn’t run up against these problems is probably a failure anyway.
Is there a reason the users must see all of the historic data too? Why not just have a post-commit hook render the current HEAD to static files, into something like GitHub Pages?
That can be moved elsewhere / mirrored later if needed, of course. And the underlying data is still in git, just not actively used for the API calls.
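A minimal sketch of that rendering step, assuming one manifest file per package in the repo (the paths and field names here are made up):

```python
import json
from pathlib import Path

def build_index(repo_root: Path, out_file: Path) -> None:
    """Walk per-package manifests in the current checkout and emit one flat
    index.json that clients fetch over plain HTTP, never touching git."""
    index = {}
    for manifest in sorted(repo_root.glob("packages/*/manifest.json")):
        data = json.loads(manifest.read_text())
        index[data["name"]] = {
            "latest": data["version"],
            "tarball": data["tarball_url"],
        }
    out_file.parent.mkdir(parents=True, exist_ok=True)
    out_file.write_text(json.dumps(index, indent=2))

# Run from a post-commit hook or CI job on the repo's HEAD.
build_index(Path("."), Path("public/index.json"))
```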
It might also be interesting to look at what Linux distros do, like Debian (salsa), Fedora (Pagure), and openSUSE (OBS). They're good for this because their historic model is free mirrors hosted by unpaid people, so they don't have the compute resources.
I'm not OP but I'll guess .... lock files with old versions of libs in. The latest version of a library may be v2 but if most users are locked to v1.267.34 you need all the old versions too.
However a lot of the "data in git repositories" projects I see don't have any such need, and then ...
> Why not just have a post-commit hook render the current HEAD to static files, into something like GitHub Pages?
... is a good plan. Usually they make a nice static website with the data that's easy for humans to read though.
> Unfortunately, when you’re starting out, the idea of running a registry is a really tough sell. Now, on top of the very hard engineering problem of writing the code and making a world class tool, plus the social one of getting it adopted, I need to worry about funding and maintaining something that serves potentially a world of traffic? The git solution is intoxicating through this lens.
So you need a decentralized database? Those exist (or you can make your own, if you're feeling ambitious), probably ones that scale in different ways than git does.
Please share. I’m interested in anything that’s roughly as simple as implementing a centralized registry, is easily inspected by users (preferably with no external tooling), and is very fast.
It’s really important that someone is able to search for the manifest one of their dependencies uses for when stuff doesn’t work out of the box. That should be as simple as possible.
I’m all ears, though! Would love to find something as simple and good as a git registry but decentralized
I wonder how meson wraps' story fits with this. They used not to, but now they're throwing everything into a single repository [0]. I wonder about the motivation and how it compares to your project.
> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.
This article is mixing two separate issues. One is using git as the master database storing the index of packages and their versions. The other is fetching the code of each package through git. They are orthogonal; you can have a package index using git but the packages being zip/tar/etc archives, you can have a package index not using git but each package is cloned from a git repository, you can have both the index and the packages being git repositories, you can have neither using git, you can even not have a package index at all (AFAIK that's the case for Go).
I think the article takes issue not with fetching the code, but with fetching the go.mod file that contains index and dependency information. That’s why part of the solution was to host go.mod files separately.
Honestly I think the article is a bit ahistorical on this one. ‘go get’ pulls the source code into a local cache so it can build it, not just to fetch the go.mod file. If they were having slow CI builds because they didn’t or couldn’t maintain a filesystem cache, that’s annoying, but not really a fault in the design. Anyway, Go improved the design and added an easy way to do faster, local proxies. Not sure what the critique is here. The Go community hit a pain point and the Go team created an elegant solution for it.
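For reference, the proxy protocol is what makes that solution cheap: a resolver only needs the tiny .mod file, served over plain HTTP, rather than a clone. A rough sketch against the public proxy (endpoint layout per the go.dev module reference):

```python
import urllib.request

def fetch_go_mod(module: str, version: str) -> str:
    # e.g. https://proxy.golang.org/golang.org/x/text/@v/v0.14.0.mod
    # (module paths containing uppercase letters need "!"-escaping first)
    url = f"https://proxy.golang.org/{module}/@v/{version}.mod"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

print(fetch_go_mod("golang.org/x/text", "v0.14.0"))
```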
Do the easy thing while it works, and when it stops working, fix the problem.
Julia does the same thing, and from the Rust numbers on the article, Julia has about 1/7th the number of packages that Rust does[1] (95k/13k = 7.3).
It works fine, Julia has some heuristics to not re-download it too often.
But more importantly, there's a simple path to improve. The top Registry.toml [1] has a path to each package, and once downloading everything proves unsustainable you can just download that one file and use it to download the rest as needed. I don't think this is a difficult problem.
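A sketch of that "fetch only what you need" path; the layout is assumed from the General registry's top-level Registry.toml, so treat the exact field names as illustrative:

```python
import tomllib  # Python 3.11+

def package_paths(registry_toml_text: str) -> dict[str, str]:
    # Registry.toml maps package UUIDs to { name, path } entries; each path
    # holds that package's own metadata (Package.toml, Versions.toml, ...),
    # which a client could fetch individually instead of cloning everything.
    registry = tomllib.loads(registry_toml_text)
    return {p["name"]: p["path"] for p in registry["packages"].values()}
```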
I believe Julia only uses the Git registry as an authoritative ledger where new packages are registered [1]. My understanding is that as you mention, most clients don't access it, and instead use the "Pkg Protocol" [2] which does not use Git.
I've mostly heard FAFO used to describe something obviously stupid.
Building on the same thing people use for code doesn't seem stupid to me, at least initially. You might have to migrate later if you're successful enough, but that's not a sign of bad engineering. It's just building for where you are, not where you expect to be in some distant future
This is too naive. Fixing the problem costs a different amount depending on when you do it. The later you leave it the more expensive it becomes. Very often to the point where it is prohibitively expensive and you just put up with it being a bit broken.
This article even has an example of that - see the vcpkg entry.
I didn't know whether they were supposed to be within the developer's control (in which case the only real concern is whether someone else has already used the id), or generated by the system (in which case a developer demonstrated manipulation of that system).
Apparently it is the former, and most developers independently generate random IDs because it's easy and is extremely unlikely to result in collisions. But it seems the dev at the top of the list had a sense of vanity instead.
You're supposed to generate a random one, but the only consequence of not doing so is that you won't be able to register your package if someone else already took the UUID (which is a pain if you have registered versions in a private registry). That said, "vanity" UUIDs are a bad look, so we'd probably reject them if someone tried that today, but there isn't any actual issue with them.
> if you allow any yahoo to pick a UUID, its not really a UUID
universally unique identifier (UUID)
> 00000000-1111-2222-3333-444444444444
It's unique.
Anyway we're talking about a package that doesn't matter. It's abandoned. Furthermore it's also broken, because it uses REPL without importing it. You can't even precompile it.
This is basically unethical. Imagine anything important in the world that worked this way. "Do nuclear engineering the easy way while it works, and when it stops working, fix the problem."
Software engineers always make the excuse that what they're making now is unimportant, so who cares? But then everything gets built on top of that unimportant thing, and one day the world crashes down. Worse, "fixing the problem" becomes near impossible, because now everything depends on it.
But really the reason not to do it, is there's no need to. There are plenty of other solutions than using Git that work as well or better without all the pitfalls. The lazy engineer picks bad solutions not because it's necessarily easier than the alternatives, but because it's the path of least resistance for themselves.
Not only is this not better, it's often actively worse. But this is excused by the same culture that gave us "move fast and break things". All you have to do is use any modern software to see how that worked out. Slow bug-riddled garbage that we're all now addicted to.
Most of the world does work this way. Problems are solved within certain conditions and for use over a certain time frame. Once those change, the problem gets revisited.
Most software gets to take it to more of an extreme than many engineering fields since there isn't physical danger. It's telling that the counter examples always use the potentially dangerous problems like medicine or nuclear engineering. The software in those fields is more stringent.
On the other hand, GitHub wants to be the place you choose to build your registry for a new project, and they are clearly on board with the idea given that they help massive projects like Nix packages instead of kicking them off.
As opposed to something like using a flock of free blogger.com blogs to host media for an offsite project.
Hold up... "lazy engineers" are the problem here? What about a society that insists on shoving the work product of unfunded, volunteer engineers into critical infrastructure because they don't want to pay what it costs to do things the right way? Imagine building a nuclear power plant with an army of volunteer nuclear engineers.
It cannot be the case that software engineers are labelled lazy for not building the at-scale solution to start with, but at the same time everyone wants to use their work, and there are next to no resources for said engineer to actually build the at scale solution.
> the path of least resistance for themselves.
Yeah because they're investing their own personal time and money, so of course they're going to take the path that is of least resistance for them. If society feels that's "unethical", maybe pony up the cash because you all still want to rely on their work product they are giving out for free.
> If society feels that's "unethical", maybe pony up the cash because you all still want to rely on their work product they are giving out for free.
I like OSS and everything.
Having said that, ethically, should society be paying for these? Maybe that is what should happen. In some places, we have programs to help artists. Should we have the same for software?
Fixing problems as they appear is unethical? Ok then.
You realize, there are people who think differently? Some people would argue that if you keep working on problems you don't have but might have, you end up never finishing anything.
It's a matter of striking a balance, and I think you're way on one end of the spectrum. The vast majority of people using Julia aren't building nuclear plants.
Refusing to fix a problem that hasn't appeared yet, but has been/can be foreseen - that's different. I personally wouldn't call it unethical, but I'd consider it a negative.
“It never works out” - hmm, seems like it worked out just fine, worked great to get the operation off the ground, and when scale became an issue it was solvable by moving to something else. It served its purpose, sounds like it worked out to me.
You appear to have glossed over the two projects in the list that are stuck due to architectural decisions, and don't have any route to migrate off of git-as-database?
The issues with nixpkgs stem from the fact that it is a monorepo for all packages that also doubles as an index.
The issues are only fundamental with that architecture. Using a separate repo for each package, like the Arch User Repos, does not have the same problems.
When you start out with a store like git, with file system semantics and a client that has to be smart to handle all the compare and merge operations, then it’s practically impossible to migrate a large client base to a new protocol. Takes years, lots of user complaints, and random breakage.
Much better to start with an API. Then you can have the server abstract the store and the operations - use git or whatever - but you can change the store later without disrupting your clients.
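As a sketch of what that buys you (a purely illustrative interface, not any particular project's API): clients only ever talk to a small, stable surface, and the store behind it can be git today and something else tomorrow.

```python
from abc import ABC, abstractmethod

class PackageIndex(ABC):
    """The only surface clients ever see."""
    @abstractmethod
    def versions(self, name: str) -> list[str]: ...
    @abstractmethod
    def manifest(self, name: str, version: str) -> dict: ...

class GitBackedIndex(PackageIndex):
    """Reads from a git checkout today..."""
    def versions(self, name): ...
    def manifest(self, name, version): ...

class DbBackedIndex(PackageIndex):
    """...and can be swapped in later without clients noticing."""
    def versions(self, name): ...
    def manifest(self, name, version): ...
```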
I couldn't agree more strongly. There is a huge opportunity to make git more effective for this kind of use-case, not to abandon it. The essay in question provides no compelling alternative; it therefore reaches an entirely half-baked conclusion.
The other conclusion to draw is "Git is a fantastic choice of database for starting your package manager, almost all popular package managers began that way."
I think the conclusion is more that package definitions can still be maintained on git/GitHub but the package manager clients should probably rely on a cache/db/a more efficient intermediate layer.
Mostly to avoid downloading the whole repo/resolve deltas from the history for the few packages most applications tend to depend on. Especially in today's CI/CD World.
This is exactly the right approach. I did this for my package manager.
It relies on a git repo branch for stable. There are YAML definitions of the packages including URLs to their repo, dependencies, etc. Preflight scripts. Post install checks. And the big one, the signatures for verification. No binaries, rpms, debs, ar, or zip files.
What’s actually installed lives in a small SQLite database, and searching for software does a vector search on each package's YAML description.
Semver included.
This was inspired by brew/portage/dpkg for my hobby os.
This is how WinGet works. It has a small SQLite db it downloads from a hosted url. The DB contains some minimal metadata and a url path to access the full metadata. This way WinGet only has to make API calls for packages it's actually interacting with. As a package manager, it has plenty of problems still, but it's a simple, elegant solution for the git as a DB issue.
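A sketch of that shape of index, with illustrative columns rather than WinGet's actual schema: just enough to search locally, plus a pointer to the full manifest.

```python
import sqlite3

conn = sqlite3.connect("index.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS packages (
    id           TEXT PRIMARY KEY,  -- e.g. publisher.name
    name         TEXT NOT NULL,
    latest       TEXT NOT NULL,     -- latest version string
    manifest_url TEXT NOT NULL      -- where the full manifest lives
);
""")
# Search happens against the local DB; only packages you actually act on
# trigger a fetch of the full manifest from manifest_url.
row = conn.execute(
    "SELECT manifest_url FROM packages WHERE name LIKE ?", ("%tool%",)
).fetchone()
```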
For the purposes of the article, git isn't just being used as a database, it's being used as a protocol to replicate the database to the client to allow for offline operation and then keep those distributed copies in sync. And even for that purpose you can do better than git if you know what you're doing, but knowledge of databases alone isn't going to help you (let alone make your engineering more economical than relying on free git hosting).
Exactly. It's not just about the best solution to the problem, it's also heavily about the economics around it. If I wanted to create a new package manager today, I could get started by utilizing Git and existing git hosting solutions with very little effort, and effort translates to time, and time is a scarce resource. If you don't know whether your package manager will take off or not, it may not be the best use of your scarce resources to invest in a robust and optimized solution out of the gate. I wish that weren't the case, I would love to have an infinite amount of time, but wishing is not going to make it happen
I think there's a form of survivorship bias at work here. To use the example of Cargo, if Rust had never caught on, and thereby gotten popular enough to inflate the git-based index beyond reason, then it would never have been a problem to use git as the backing protocol for the index. Likewise, we can imagine innumerable smaller projects that successfully use git as a distributed delta-updating data distribution protocol, and never happen to outgrow it.
The point being, if you're not sure whether your project will ever need to scale, then it may not make sense to reinvent the wheel when git is right there (and then invent the solution for hosting that git repo, when Github is right there), letting you spend time instead on other, more immediate problems.
Right, this post may encourage premature optimization. Cargo, Homebrew, et al chose an easy, good-enough solution which allowed them to grow until they hit scaling limits. This is a good problem to have.
I am sure there's value having a vision for what your scaling path might be in the future, so this discussion is a good one. But it doesn't automatically mean that git is a bad place to start.
It’s always humbling when you go on the front page of HN and see an article titled “the thing you’re doing right now is a bad idea and here’s why”
This has happened to me a few times now. The last one was a fantastic article about how PG Notify locks the whole database.
In this particular case it just doesn’t make a ton of sense to change course. I'm a solo dev building a thing that may never take off, so using git for plug-in distribution is just a no brainer right now. That said, I’ll hold on to this article in case I’m lucky enough to be in a position where scale becomes an issue for me.
I host my own code repository using Forgejo. It's not public. In fact, it's behind mutual TLS like all the services I host. Reason? I don't want to deal with bots and other security risks that come with opening a port to the world.
Turns out Go modules will not accept a package hosted on my Forgejo instance because it asks for a certificate. There are ways to make go get use ssh but even with that approach the repository needs to be accessible over https. In the end, I cloned the repository and used it in my project with a replace directive. It's really annoying.
If you add .git to the end of your module path and set $GOPRIVATE to the hostname of your Forgejo instance, then Go will not make any HTTPS requests itself and instead delegate to the git command, which can be configured to authenticate with client certificates. See https://go.dev/ref/mod#vcs-find
> There are ways to make go get use ssh but even with that approach the repository needs to be accessible over https.
No, that's false. You don't need anything to be accessible over HTTP.
But even if it did, and you had to use mTLS, there's a whole bunch of ways to solve this. How do you solve this for any other software that doesn't present client certs? You use a local proxy.
It's not just package managers that do this - a lot of smaller projects crowd-source data in git repositories. Most of these don't reach the scale where the technical limitations become a problem.
Personally my view is that the main problem when they do this is that it gets much harder for non-technical people to contribute. At least that doesn't apply to package managers, where it's all technical people contributing.
There are a few other small problems - but it's interesting to see that so many other projects do this.
I ended up working on an open source software library to help in these cases: https://www.datatig.com/
Here's a write up of an introduction talk about it: https://www.datatig.com/2024/12/24/talk.html I'll add the scale point to future versions of this talk with a link to this post.
This seems to be about hosting an SQLite database on a static website like GitHub Pages - this can be a great plan, there is also Datasette in a browser now: https://github.com/simonw/datasette-lite
But that's different from how you collect the data in a git repository in the first place - or are you suggesting just putting an SQLite file in a git repository? If so I can think of one big reason against that.
Yes, I'm suggesting hosting it on GitHub, leveraging their Git LFS support. Just treat it like a binary blob and periodically update it with a tagged release.
It's not clear if you are suggesting accepting contributions to the SQLite file via PR from people (but accepting contributions is generally the point of why people put these projects on GitHub).
But if you are I wouldn't recommend it.
PRs won't be able to show diffs. Worse, as soon as multiple people send a PR at once you'll have a really painful merge to resolve, and GitHub's tools won't help you at all. And you can't edit the files in GitHub's web UI.
I recommend one file per record, in JSON, YAML, whatever non-binary format you want. But then you get:
* PRs with diffs that show you what's being changed
* Files that technical people can edit directly in GitHub's web editor
* If 2 people make PRs on different records at once it's an easy merge with no conflicts
* If 2 people make PRs on the same record at once ... ok, you might now have a merge conflict to resolve, but it's in an easy text file and the GitHub UI will let you see what it is.
You can of course then compile these data files into a SQLite file that can be served in a static website nicely - in fact if you see my other comments on this post I have a tool that does this. And on that note, sorry, I've done a few projects in this space so I have views :-)
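For the "compile these data files into a SQLite file" step, a minimal sketch (assuming a records/ directory of one JSON file per record with name and description fields; adjust to your own layout):

```python
import json
import sqlite3
from pathlib import Path

# Compile one-file-per-record JSON into a single SQLite file for static hosting.
db = sqlite3.connect("data.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, name TEXT, description TEXT)"
)

for path in Path("records").glob("*.json"):
    record = json.loads(path.read_text())
    db.execute(
        "INSERT OR REPLACE INTO records VALUES (?, ?, ?)",
        (path.stem, record.get("name"), record.get("description")),
    )

db.commit()
db.close()
```

Run something like this from a post-commit hook or CI job and publish data.db alongside the rendered static site.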
It also seems like it's not git that's emitting scary creaks and groans, but rather GitHub. As much as it would be a bummer to forgo some of GitHub's nice-to-have features, I expect we could survive without some of it.
Furthermore, the issues given for nixpkgs are actually demonstrating the success of using git as the database! Those 20k forks are all people maintaining their own version of nixpkgs on Github, right? Each their own independent tree that users can just go ahead and modify for their own whims and purposes, without having to overcome the activation energy of creating their own package repository.
If 83GB (4MB/fork) is "too big" then responsibility for that rests solely on the elective centralization encouraged by Github. I suspect if you could go and total up the cumulative storage used by the nixpkgs source tree distributed on computers spread throughout the world, that is many orders of magnitude larger.
In a compressed format, later commits would be added as a delta of some kind, to avoid increasing the size by the whole tree size each time. To make shallow clones efficient you'd need to rewrite the compressed form such that earlier commits are instead deltas on later ones, or something equivalent.
Admittedly, I try to stay away from database design whenever possible at work. (Everything database-related is legacy for us.) But the way the term is being used here kinda makes me wonder: do modern SQL databases have enough security features and permission management in place that you could just directly expose your database to the world with a "guest" user that can only make incredibly specific queries?
Cut out the middle man, directly serve the query response to the package manager client.
(I do immediately see issues stemming from the fact that you can't leverage features like edge caching this way, but I'm not really asking if it's a good solution, I'm more asking if it's possible at all)
There are still no realistic ways to expose a hosted SQL solution to the public without really unhappy things occurring. It doesn't matter which vendor you pick.
Anything where you are opening a TCP connection to a hosted SQL server is a non-starter. You could hypothetically have so many read replicas that no one could blow anyone else up, but this would get to be very expensive at scale.
Something involving SQLite is probably the most viable option.
I personally think that this is the future, especially since such an architecture allows for E2E encryption of the entire database.
The protocol should just be a transaction layer for coordinating changes of opaque blobs.
All of the complexity lives on the client.
That makes a lot of sense for a package manager because it's something lots of people want to run, but no one really wants to host.
So what's the answer then? That's the question I wanted answered after reading this article. With no experience with git or package management, would using a local client sqlite database and something similar on the server do?
OCI artifacts, using the same protocol as container registries. It's a protocol designed for versioning (tagging) content addressable blobs, associating metadata with them, and it's CDN friendly.
Homebrew uses OCI as its backend now, and I think every package manager should. It has the right primitives you expect from a registry to scale.
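For a sense of what that looks like on the wire, here is a rough sketch against the OCI distribution API (auth omitted; registry.example.com and example/mypkg are placeholders, not Homebrew's actual endpoints):

```python
import json
import urllib.request

REGISTRY = "https://registry.example.com"  # placeholder registry
REPO = "example/mypkg"                      # placeholder repository
TAG = "1.2.3"

# Resolve a tag to a manifest via the standard /v2/ endpoints.
req = urllib.request.Request(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
)
with urllib.request.urlopen(req) as resp:
    manifest = json.load(resp)

# Each layer is a content-addressed blob at /v2/<repo>/blobs/<digest>,
# which is what makes the whole scheme cache- and CDN-friendly.
for layer in manifest.get("layers", []):
    print(layer["digest"], layer.get("size"))
```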
The Cargo example at the top is striking. Whenever I publish a crate, and it blocks me until I write `--allow-dirty`, I am reminded that there is a conflation between Cargo/crates.io and Git that should not exist. I will write `--allow-dirty` because I think these are two separate functionalities that should not be coupled. Crates.io should not know about or care about my project's Git usage or lack thereof.
I think git is overkill, and probably a database is as well.
I quite like the hackage index, which is an append-only tar file. Incremental updates are trivial using HTTP range requests making hosting it trivial as well.
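A minimal sketch of that incremental update, assuming an append-only index served over plain HTTP (the URL is a placeholder): ask only for the bytes past what you already have.

```python
import os
import urllib.request
from urllib.error import HTTPError

INDEX_URL = "https://example.org/01-index.tar"  # placeholder index location
LOCAL = "01-index.tar"

have = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0
req = urllib.request.Request(INDEX_URL, headers={"Range": f"bytes={have}-"})
try:
    # The server answers 206 Partial Content with just the new tail,
    # which we append to our local copy of the index.
    with urllib.request.urlopen(req) as resp, open(LOCAL, "ab") as out:
        out.write(resp.read())
except HTTPError as e:
    if e.code != 416:  # 416 means nothing new past our current offset
        raise
```

Because the file only ever grows, any dumb HTTP server or CDN that supports range requests can host it.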
The facts are interesting but the conclusion a bit strange. These package managers have succeeded because git is better for the low trust model and GitHub has been hosting infra for free that no one in their right mind would provide for the average DB.
If it didn't work we would not have these massive ecosystems upsetting GitHub's freemium model, but anything at scale is naturally going to have consequences and features that aren't so compatible with the use case.
Uncertain if this is OT, but given that the CCC is a politically inspired organization, I hope not:
One thing that still seems absent is awareness of the complete takeover of "gadgets" in schools. Schools these days, as early as primary school, shove screens in front of children. They're expected to look at them, and "use" them for various activities, including practicing handwriting. I wish I was joking [1].
I see two problems with this.
First is that these devices are engineered to be addictive by way of constant notifications/distractions, and learning is something that requires long sustained focus. There's a lot of data showing that under certain common circumstances, you do worse learning from a screen than from paper.
Second is implicitly it trains children to expect that anything has to be done through a screen connected to a closed point-and-click platform. (Uninformed) people will say "people who work with computers make money, so I want my child to have an ipad". But interacting with a closed platform like an ipad is removing the possibilities and putting the interaction "on rails". You don't learn to think, explore and learn from mistakes, instead you learn to use the app that's put in front of you. This in turn reinforces the "computer says no" [2] approach to understanding the world.
I think this is a matter of civil rights and freedom, but sadly I don't often see "civil rights" organizations talk about this. I think I heard Stallman say something along these lines once, but other than that I don't see campaigns anywhere.
What made git special & powerful from the start was its data model: Like the network databases of old, but embedded in a Merkle tree for independent evolution and verifiability.
Scaling that data model beyond projects the size of the Linux kernel was not critical for the original implementation. I do wonder if there are fundamental limits to scaling the model for use cases beyond “source code management for modest-sized, long-lived projects”.
Most of the problems mentioned in the article are not problems with using a content-addressed tree like git or even with using precisely git’s schema. The problems are with git’s protocol and GitHub’s implementation thereof.
Consider vcpkg. It’s entirely reasonable to download a tree named by its hash to represent a locked package. Git knows how to store exactly this, but git does not know how to transfer it efficiently.
> Git knows how to store [a hash-addressed tree], but git does not know how to transfer it efficiently.
Naïvely, I’d expect shallow clones to be this, so I was quite surprised by a mention of GitHub asking people not to use them. Perhaps Git tries too hard to make a good packfile?..
Meanwhile, what Nixpkgs does (and why “release tarballs” were mentioned as a potential culprit in the discussion linked from TFA) is request a gzipped tarball of a particular commit’s files from a GitHub-specific endpoint over HTTP rather than use the Git protocol. So that’s already more or less what you want, except even the tarball is 46 MB at this point :( Either way, I don’t think the current problems with Nixpkgs actually support TFA’s thesis.
As far as I know, Nixpkgs doesn't use git as a package database. The packages definitions are stored and developed in git, but the channels certainly are not.
And this my friends is the reason why (only) focusing on CPU cycles and memory hierarchies is insufficient when thinking of the performance of a system. Yes they are important. But no level of low-level optimization will get you out of the hole that a wrong choice of algorithm and/or data structure may have dug you into.
I also got the same feeling from that; in fact, I would go as far as to say that nixpkgs and the nix commands' integration with git works quite well and is not an issue.
So when the article says "Package managers keep falling for this. And it keeps not working out", I feel that's untrue.
The biggest issue I have with this is really the "flakes" integration, where the whole recipe folder is copied into the store (which doesn't happen with non-flakes commands), but that's a tooling problem, not an intrinsic problem of using git.
If we stopped using VCS to fetch source files, we would lose the ability to get the exact commit (understood as a version, which has nothing to do with the underlying VCS) of these files. Git, Mercurial, SVN... GitHub, Bitbucket... it does not matter. Absolutely nobody will be building downloadable versions of their source files, hosted on who knows how "prestigious" domains, by copying them to another location just to serve the exact same content that GitHub and the like already provide.
This entire blog is just a waste of time for anyone reading it.
You could, in case you want to make only certain releases publicly available. But then, who wants to do that manual labour? We're talking mainstream here, not specific use cases.
Not in the same sense. An analogy might be: apt is like fetching a git repo in which all the packages are submodules, so lazily fetched. Some of the package managers in the article seem to be using a monorepo for all packages - including the content. Others seem to have different issues - go wasn't including enough information in the top level, so all the submodules had to be fetched anyway. vcpkg was doing something with tree hashes which meant they weren't really addressable.
That it supports fetching via Git as well as via various forge-specific tarballs, even for flakes, is pretty nice. It means that if your org uses Nix, you can fall back to distribution via Git as a solution that doesn't require you to stand up any new infra or tie you to any particular vendor, but once you get rolling it's an easy optimization to switch to downloading snapshots.
The most pain probably just comes from the hugeness of Nixpkgs, but I remain an advocate for the huge monorepo of build recipes.
Yes agreed. It’s possible to imagine some kind of cached-deltas scheme to get faster/smaller updates, but I suspect the folks who would have to build and maintain that are all on gigabit internet connections and don’t feel the complexity is worth it.
> Grab’s engineering team went from 18 minutes for go get to 12 seconds after deploying a module proxy. That’s not a typo. Eighteen minutes down to twelve seconds.
> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies. Cloning entire repositories to get a single file.
I have also had inconsistent performance with go get. Never enough to look closely at it. I wonder if I was running into the same issue?
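For context, the module proxy protocol exposes each version's go.mod on its own, so resolvers can read dependency requirements without cloning anything. A rough illustration against the public proxy (module and version chosen arbitrarily):

```python
import urllib.request

# The GOPROXY protocol serves the bare go.mod for a version at
#   <proxy>/<module>/@v/<version>.mod
# so resolution costs a few kilobytes instead of a full clone.
module, version = "github.com/pkg/errors", "v0.9.1"
url = f"https://proxy.golang.org/{module}/@v/{version}.mod"
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())
```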
> needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.
Python used to have this problem as well (technically still does, but a large majority of things are available as a wheel and PyPI generally publishes a separate .metadata file for those wheels), but at least it was only a question of downloading and unpacking an archive file, not cloning an entire repo. Sheesh.
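For comparison, a resolver can now read a package's dependency list from PyPI's JSON API (or the per-wheel .metadata files) without downloading or unpacking any archive; a rough sketch:

```python
import json
import urllib.request

# PyPI exposes per-release metadata, including requires_dist, as JSON,
# so dependency resolution never has to touch the wheel itself.
name, version = "requests", "2.31.0"
url = f"https://pypi.org/pypi/{name}/{version}/json"
with urllib.request.urlopen(url) as resp:
    info = json.load(resp)["info"]
print(info.get("requires_dist"))
```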
Why would Go need to do that, though? Isn't the go.mod file in a specific place relative to the package root in the repo?
I'd add git Gemfile dependencies to the list of languages called out here as well. It supports git repos, but in general it's a bad idea unless you are diligent with git tag use and disallow git tag mutability, which also assumes you have complete control of your git dependencies...
One of the first things I did at my current place of employment was to detangle the mess of Gemfile git dependencies and get them to adopt semver and an actual package repo. There were so many footguns with git dependencies in Ruby that we were getting taken down by friendly fire on the daily...
Not sure I can agree with the takeaway. It works well at first, but doesn’t scale, so folks found workarounds. That’s how literally every working system grows. There are always bottlenecks eventually. And you address them when they become an issue, not five years earlier.
I want to take a quick detour here if anyone is knowledgeable about this topic.
> The hosting problems are symptoms. The underlying issue is that git inherits filesystem limitations, and filesystems make terrible databases.
Does this mean mbox is inherently superior to maildir? I really like the idea of maildir because there is nothing to compact but if we assume we never delete emails (on the local machine anyways), does that mean mbox or similar is preferable over maildir?
The article's conclusion is just... not good. There are many benefits to using Git as the backend: you can point your project at every single commit as a version, which makes testing any fixes or changes in libs super easy; it has built-in integrity control; and technically (sadly not in practice) you could just sign commits and use that to verify whether a package is authentic.
It being suboptimal bandwidth-wise is frankly just a technical hurdle to get over, with benefits well worth the drawback.
Loved this article. Just enough detail to make the broad scope compatible with a reasonable length, and well-argued.
I feel sometimes like package management is a relatively second-class topic in computer science (or at least among many working programmers). But a package manager's behavior can be the difference between a grotesque, repulsive experience and a delightful, beautiful one. And there aren't quite yet any package managers that do well everything that we collectively have learned how to do well, which makes it an interesting space imo.
Re: Nixpkgs, interestingly, pre-flakes Nix distributes all of the needed Nix expressions as tarballs, which does play nice with CDNs. It also distributes an index of the tree as a SQLite database to obviate some of the "too many files/directories" problem with enumerating files. (In the meantime, Nixpkgs has also started bucketing package directories by name prefix, too.) So maybe there was a lesson learned here that would be useful to re-learn.
On the other hand, IIRC if you use the GitHub fetcher rather than the Git one, including for fetching flakes, Nix will download tarballs from GitHub instead of doing clones. Regardless, downloading and unpacking Nixpkgs has become kinda slow. :-\
The article lists Git-based wiki engines as a bad usage of Git. Can anybody recommend alternatives? I want something that can be self-hosted, is easily modified by text editors, and has individual page history, preferably with Markdown.
We wanted to pull updated code into our undockerized instances when they were instantiated, so we decided to pull the code from GitHub. Worked out pretty well, though after a thousand trials we got a 502, and now we're one step closer to being forced into a CD pipeline.
The conclusion reached in this essay is 100% wrong. See "The reftable backend: What it is, where it's headed, and why should you care?"
> With release 2.45, Git has gained support for the "reftable" backend to read and write references in a Git repository. While this was a significant milestone for Git, it wasn't the end of GitLab's journey to improve scalability in repositories with many references. In this talk you will learn what the reftable backend is, what work we did to improve it even further and why you should care.
It’s basically the same thing that always happens when you choose a technology because it’s convenient rather than a great fit for your problem. Sooner or later, you’ll hit a wall.
Just because you can cook a salmon in your dishwasher doesn’t mean you should.
I feel like the rqlite people would have a lot to say about how to coordinate your installations, especially for the high-bandwidth non-desktop installs.
As a side note, maybe someone knows why Rust devs chose an already-used name for language change proposals? "RFC" was already taken and well established, and I simply refuse to accept that someone wasn't aware of Request for Comments. If the clash was created deliberately, then it was rude and arrogant.
Every ...king time I read something like "RFC 2789 introduced a sparse HTTP protocol," my brain suffers a short circuit. BTW: RFC 2789 is a "Mail Monitoring MIB".
But they were in different domains. Here we have a strong clash, because Rust is positioning itself as a secure systems and internet language, and computer and internet standards are already defined by RFCs. So it may not be uncommon for someone to describe a Rust mechanism, defined by a particular RFC, in the context of handling a particular protocol that is defined by... well... an RFC too. Just not a Rust one.
Not so smart, when we realize that one aspect of a secure and reliable system is the elimination of ambiguity.
> Helldivers 2 devs slash install size from 154GB to 23GB
https://news.ycombinator.com/item?id=46134178
Section of the top comment says,
> It seems bizarre to me that they'd have accepted such a high cost (150GB+ installation size!) without entirely verifying that it was necessary!
and the reply to it has,
> They’re not the ones bearing the cost. Customers are.
And Skylines rendering teeth on models miles away https://www.reddit.com/r/CitiesSkylines/comments/17gfq13/the...
Sometimes the performance is really ignored.
A large part of my friend group uses Discord as the primary method of communication, even in an in-person context (I was at a festival a few months ago with a friend, and we would send texts over Discord if we got split up), so maybe it's not a common use case.
>>what you mean by software houses
How about Microsoft? The Start menu is a slow Electron app.
If your users are trapped due to a lack of competition then this can definitely happen.
This is true for sites that are trying to make sales. You can quantify how much a delay affects closing a sale.
For other apps, it’s less clear. During its high-growth years, MS Office had an abysmally long startup time.
Maybe this was due to MS having a locked-in base of enterprise users. But given that OpenOffice and LibreOffice effectively duplicated long startup times, I don’t think it’s just that.
You also see the Adobe suite (and also tools like GIMP) with some excruciatingly long startup times.
I think it’s very likely that startup times of office apps have very little impact on whether users will buy the software.
Are they evaluating the shape of that line with the same goal as the stonk score? Time spent by users is an "engagement" metric, right?
A commons would be something owned by nobody that everyone benefits from.
This isn’t what “commons” means in the term ‘tragedy of the commons’, and the obvious end result of your suggestion to take as much as you can is to cause the loss of access.
Anything that is free to use is a commons, regardless of ownership, and when some people use too much, everyone loses access.
Finite digital resources like bandwidth and database sizes within companies are even listed as examples in the Wikipedia article on Tragedy of the Commons. https://en.wikipedia.org/wiki/Tragedy_of_the_commons
The behavior that you warn against is that of a free rider that makes use of a positive externality of GitHub's offering.
“Commons can also be defined as a social practice of governing a resource not by state or market but by a community of users that self-governs the resource through institutions that it creates.”
https://en.wikipedia.org/wiki/Commons
The actual mechanism by which ownership resolves tragedy of the commons scenarios is by making the resource non-free, by either charging, regulating, or limiting access. The effect still occurs when something is owned but free, and its name is still ‘tragedy of the commons’, even when the resource in question is owned by private interests.
Edit: oh, I do see what you mean, and yes I misunderstood the quote I pulled from WP - it’s talking about non-ownership. I could pick a better example, but I think that’s distracting from the fact that ‘tragedy of the commons’ is a term that today doesn’t depend on the definition of the word ‘commons’. It’s my mistake to have gotten into any debate about what “commons” means, I’m only saying today’s usage and meaning of the phrase doesn’t depend on that definition, it’s a broader economic concept.
The idea of the tragedy of the commons relies on this feedback loop of having these unsustainably growing herds (growing because they can exploit the zero-cost-to-them resources of the commons). Feedback loops are notoriously sensitive to small parameter changes. MS could presumably impose some damping if they wanted.
Not linearity but continuity, which I think is a well-founded assumption, given that it's our categorization that simplifies the world by drawing sharp boundaries where no such bounds exist in nature.
> The idea of the tragedy of the commons relies on this feedback loop of having these unsustainably growing herds (growing because they can exploit the zero-cost-to-them resources of the commons)
AIUI, zero-cost is not a necessary condition, a positive return is enough. Fishermen still need to buy fuel and nets and pay off loans for the boats, but as long as their expected profit is greater than that, they'll still overfish and deplete the pond, unless stronger external feedback is introduced.
Given that the solution to tragedy of the commons is having the commons owned by someone who can boss the users around, GitHub being owned by MS makes it more of a commons in practice, not less.
But I would make the following clarifications:
1. A private entity is still the steward of the resource and therefore the resource figures into the aims, goals, and constraints of the private entity.
2. The common good is itself under the stewardship of the state, as its function is guardian of the common good.
3. The common good is the default (by natural law) and prior to the private good. The latter is instituted in positive law for the sake of the former by, e.g., reducing conflict over goods.
I think it's both simpler and deeper than that.
Governments and corporations don't exist in nature. Those are just human constructs, mutually-recursive shared beliefs that emulate agents following some rules, as long as you don't think too hard about this.
"Tragedy of the commons" is a general coordination problem. The name itself might've been coined with some specific scenarios in mind, but for the phenomenon itself, it doesn't matter what kind of entities exploit the "commons"; the "private" vs. "public" distinction itself is neither a sharp divide, nor does it exist in nature. All that matters is that there's some resource used by several independent parties, and each of them finds it more beneficial to defect than to cooperate.
In a way, it's basically a 3+-player prisoner's dilemma. The solution is the same, too: introducing a party that forces all other parties to cooperate. That can be a private or public or any other kind of org taking ownership of the commons and enforcing quotas, or in the case of prisoners, a mob boss ready to shoot anyone who defects.
The "tragedy", if you absolutely need to find one, is only for unrestricted, free-for-all commons, which is obviously a bad idea.
But that doesn't mean the tragedy of the commons can't happen in other scenarios. If we define commons a bit more generously it does happen very frequently on the internet. It's also not difficult to find cases of it happening in larger cities, or in environments where cutthroat behavior has been normalized
That works while the size of the community is ~100-200 people, when everyone knows everyone else personally. It breaks down rapidly after that. We compensate for that with hierarchies of governance, which give rise to written laws and bureaucracy.
New tribes break off old tribes, form alliances, which form larger alliances, and eventually you end up with countries and counties and voivodeships and cities and districts and villages, in hierarchies that gain a level per ~100x population increase.
This is sociopolitical history of the world in a nutshell.
You say it like this is a law set in stone, because this is what happened in history, but I would argue it happened under different conditions.
Mainly, the advantage of an empire over small villages/tribes is not at all that it has more power than the villages combined, but that it can concentrate its power where it is needed. One village did not stand a chance against the empire, and the villages were not coordinated enough.
But today we have the internet for better communication and coordination, enabling the small entities to coordinate a defense.
Well, in theory of course. Because we do not really have autonomous small states, but are dominated by the big players. And the small states mostly have the choice of which bloc to align with, or get crushed. But the trend might go towards small again.
(See also cheap drones destroying expensive tanks, battleships etc.)
NETWORK effect is a real thing
Yet we regularly observe this working with millions of people: we take care of our young, we organize, and when we see that some action hurts our environment we tend to limit it.
It's not obvious why some societies break down early and some go on working.
That's more like human universals. These behaviors generally manifest to a smaller or larger degree, depending on how secure people feel. But those are extremely local behaviors. And in fact, one of them is exactly the thing I'm talking about:
> we organize
We organize. We organize for many reasons, "general living" is the main one but we're mostly born into it today (few got the chance to be among the founding people of a new village, city or country). But the same patterns show up in every other organizations people create, from companies to charities, from political interests groups to rural housewives' circles -- groups that grow past ~100 people split up. Sometimes into independent groups, sometimes into levels of hierarchies. Observe how companies have regional HQs and departments and areas and teams; religious groups have circuits and congregations, etc. Independent organizations end up creating joint ventures and partnerships, or merge together (and immediately split into a more complex internal structure).
The key factor here is, IMO, for everyone in a given group to be in regular contact with everyone else. Humans are well evolved for living in such small groups - we come with built-in hardware and software to navigate complex interpersonal situations. Alignment around shared goals and implicit rules is natural at this scale. There's no space for cheaters and free-loaders to thrive, because everyone knows everyone else - including the cheater and their victims. However, once the group crosses this "we're all a big family, in it together" size, coordinating everyone becomes hard, and free-loaders proliferate. That's where explicit laws come into play.
This pattern repeats daily, in organizations people create even today.
But if a significant fraction of the population is barely scraping by then they're not willing to be "good" if it means not making ends meet, and when other people see widespread defection, they start to feel like they're the only one holding up their end of the deal and then the whole thing collapses.
This is why the tendency for people to propose rent-seeking middlemen as a "solution" to the tragedy of the commons is such a diabolical scourge. It extracts the surplus that would allow things to work more efficiently in their absence.
It’s easier to explain in those terms than assumptions about how things work in a tribe.
Commons can fail, but the whole point of Hardin calling commons a "tragedy" is to suggest it necessarily fails.
Compare it to, say, driving. It can fail too, but you wouldn't call it "the tragedy of driving".
We'd be much better off if people didn't throw around this zombie term decades after it's been shown to be unfounded.
No it does not. This sentiment, which many people have, is based on a fictional and idealistic notion of what small communities are like, held by people who have never lived in such communities.
Empirically, even in high-trust small villages and hamlets where everyone knows everyone, the same incentives exist and the same outcomes happen. Every single time. I lived in several and I can't think of a counter-example. People are highly adaptive to these situations and their basic nature doesn't change because of them.
Humans are humans everywhere and at every scale.
Nonetheless, the concept is still alive, and anthropic global warming is here to remind you about this.
Communal management of a resource is still government, though. It just isn’t central government.
The thesis of the tragedy of the commons is that an uncontrolled resource will be abused. The answer is governance at some level, whether individual, collective, or government ownership.
> The "tragedy", if you absolutely need to find one, is only for unrestricted, free-for-all commons, which is obviously a bad idea.
Right. And that’s what people are usually talking about when they say “tragedy of the commons”.
This is of course a false dichotomy because governance can be done at any level.
Let's Encrypt is a solid example of something you could reasonably model as "tragedy of the commons" (who is going to maintain all this certificate verification and issuance infrastructure?) but then it turns out the value of having it is a million times more than the cost of operating it, so it's quite sustainable given a modicum of donations.
Free software licenses are another example in this category. Software frequently has a much higher value than development cost and incremental improvements decentralize well, so a license that lets you use it for free but requires you to contribute back improvements tends to work well because then people see something that would work for them except for this one thing, and it's cheaper to add that themselves or pay someone to than to pay someone who has to develop the whole thing from scratch.
The jerks get their free things for a while, then it goes away for everyone.
And out of curiosity, aside from costing more for some people, what’s worse exactly? I’m not a heavy GitHub user, but I haven’t really noticed anything in the core functionality that would justify calling it enshittified.
Probably the worst thing MS did was kill GitHub’s nascent CI project and replace it with Azure DevOps. Though to be fair the fundamental flaws with that approach didn’t really become apparent for a few years. And GitHub’s feature development pace was far too slow compared to its competitors at the time. Of course GitHub used to be a lot more reliable…
Now they’re cramming in half baked AI stuff everywhere but that’s hardly a MS specific sin.
MS GitHub has been worse about DMCA and sanctioned country related takedowns than I remember pre acquisition GitHub being.
Did I miss anything?
As for how the site has become worse, plenty of others have already done a better job than I could there. Other people haven't noticed or don't care and that's ok too I guess.
Remember how GTA5 took 10 minutes to start and nobody cared? Lots of software is like this.
Some Blizzard games download a 137 MB file every time you run them and take a few minutes to start (and no, this is not due to my computer).
Google and Amazon are famous for optimizing this. It's not an externality to them though; even 10s of ms can equal an extra sale.
That said, I don't think it's fair to add time up like that. Saving 1 second for 600 people is not the same as saving 10 minutes for 1 person. Time in small increments does not have the same value as time in large increments.
2. Monopolies and situations with the principal/agent dilemma are less sensitive to such concerns.
An externality is usually a cost you don't pay (or pay only a negligible amount of). I don't see how pricing it helps justify optimizing it.
> "The Macintosh boots too slowly. You've got to make it faster!"
This is what people mean about speed being a feature. But "user time" depends on more than the program's performance. UI design is also very important.
The number of companies that have this much respect for the user is vanishingly small.
I think companies shifted to online apps because, #1, it solved the copy protection problem. FOSS apps are not in any hurry to become centralized because they don't care about that issue.
Local apps and data are a huge benefit of FOSS and I think every app website should at least mention that.
"Local app. No ads. You own your data."
Native software being an optimum is mostly an engineer fantasy that comes from imagining what you can build.
In reality that means having to install software like Meta’s WhatsApp, Zoom, and other crap I’d rather run in a browser tab.
I want very little software running natively on my machine.
Yes, there are many cases when condoms are indicative of respect between parties. But a great many people would disagree that the best, most respectful relationships involve condoms.
> Meta
Does not sell or operate respectful software. I will agree with you that it's best to run it in a browser (or similar sandbox).
I think this is sad.
I know the browser is convenient, but frankly, it's been a horror show of resource usage, vulnerabilities, and pathetic performance.
The idea that somehow those companies would respect your privacy were they running a native app is extremely naive.
We can already see this problem on video games, where copy protection became resource-heavy enough to cause performance issues.
What is the probability of it being used? About 0%, right? Because git is proven and GitHub is free. Engineering aspects are less important.
First argument would be: take at least two zeros off your estimate. Most applications will have maybe thousands of users; successful ones might run with tens of thousands. You might get lucky and work on an application that has hundreds of thousands or millions of users, but then you work at a FAANG, not a typical "software house".
Second argument: most users use 10-20 apps in a typical workday, so your application is most likely irrelevant to them.
Third argument: most users would save much more time by learning to properly use the applications (or the computer) they rely on daily than from someone optimizing some function from 2s to 1s. Of course that's hard, because they have 10-20 apps daily plus who knows how many others used less often. Still, I see people doing super silly stuff in tools like Excel, or not even knowing copy-paste, so this isn't even about command-line magic.
This is perfectly sensible behavior when the developers are working for free, or when the developers are working on a project that earns their employer no revenue. This is the case for several of the projects at issue here: Nix, Homebrew, Cargo. It makes perfect sense to waste the user's time, as the user pays with nothing else, or to waste Github's bandwidth, since it's willing to give bandwidth away for free.
Where users pay for software with money, they may be more picky and not purchase software that indiscriminately wastes their time.
Windows 11 should not be more sluggish than Windows 7.
https://www.folklore.org/Saving_Lives.html
I have never been convinced by this argument. The aggregate number sounds fantastic but I don't believe that any meaningful work can be done by each user saving 1 second. That 1 second (and more) can simply be taken by me trying to stretch my body out.
OTOH, if the argument is to make software smaller, I can get behind that since it will simply lead to more efficient usage of existing resources and thus reduce the environmental impact.
But we live in a capitalist world and there needs to be external pressure for change to occur. The current RAM shortage, if it lasts, might be one of them. Otherwise, we're only day dreaming for a utopia.
Not all of those externalizing companies abuse your time, but whatever they abuse can be expressed in a $ amount, and $ can be converted to a median person's time via the median wage. Hell, free time is more valuable than whatever you produce during work.
Say all that boils down to companies collectively stealing 20 minutes of your time each day. 140 minutes each week. 7280 (!) minutes each year, which is 5.05 days, which makes it almost a year over the course of 70 years.
So yeah, don't do what you're doing and sweet-talk the fact that companies externalize costs (privatize the profits, socialize the losses). They're sucking your blood.
A high-usage one? Absolutely, improve its time.
Loading the profile page? That isn't done often, so it's not really worth it unless it's a known and vocal issue.
https://xkcd.com/1205/ gives a good estimate.
The article mentions that most of these projects did use GitHub as a central repo out of convenience so there’s that but they could also have used self-hosted repos.
I've looked into self-hosting a git repo with horizontal scalability, and it is indeed very difficult. I don't have the time to detail it in a comment here, but for anyone who is curious it's very informative to look at how GitLab handled this with Gitaly. I've also seen some clever attempts to use object storage, though I haven't seen any of those solutions put heavily to the test.
I'd love to hear from others about ideas and approaches they've heard about or tried
https://gitlab.com/gitlab-org/gitaly
Explain to me how you self-host a git repo that is accessed millions of times a day by CI jobs pulling packages, without spending any money and with no budget.
https://clickpy.clickhouse.com/dashboard/numpy
Oh no no no. Consumer-facing companies will burn 30% of your internal team complexity budget on shipping the first "frame" of your app/website. Many people treat Next as synonymous with React, and Next's big deal was helping you do just this.
The answer is in TFA:
> The underlying issue is that git inherits filesystem limitations, and filesystems make terrible databases.
Because it's bad at this job, and SQLite is also free.
This isn't about "externalities".
You can implement entire features with 10 cents of tokens.
Companies which don't adapt will be left behind this year.
Anyone working in government, banking, or healthcare is still out of luck since the likes of Claude and GPT are (should be) off limits.
Also, in case you're not aware, accusing people of shilling or astroturfing is against the Hacker News guidelines.
Unfortunately, when you're starting out, the idea of running a registry is a really tough sell. Now, on top of the very hard engineering problem of writing the code and making a world class tool, plus the social one of getting it adopted, I need to worry about funding and maintaining something that serves potentially a world of traffic? The git solution is intoxicating through this lens.
Fundamentally, the issue is the sparse checkouts mentioned by the author. You’d really like to use git to version package manifests, so that anyone with any package version can get the EXACT package they built with.
But this doesn’t work, because you need arbitrary commits. You either need a full checkout, or you need to somehow track the commit a package version is in without knowing what hash git will generate before you do it. You have to push the package update and then push a second commit recording that. Obviously infeasible, obviously a nightmare.
Conan’s solution is I think just about the only way. It trades the perfect reproduction for conditional logic in the manifest. Instead of 3.12 pointing to a commit, every 3.x points to the same manifest, and there’s just a little logic to set that specific config field added in 3.12. If the logic gets too much, they let you map version ranges to manifests for a package. So if 3.13 rewrites the entire manifest, just remap it.
I have not found another package manager that uses git as a backend that isn’t a terrible and slow tool. Conan may not be as rigorous as Nix because of this decision but it is quite pragmatic and useful. The real solution is to use a database, of course, but unless someone wants to wire me ten thousand dollars plus server costs in perpetuity, what’s a guy supposed to do?
Every package has its own git repository which for binary packages contains mostly only the manifest. Sources and assets, if in git, are usually in separate repos.
This seems to not have the issues in the examples given so far, which come from using "monorepos" or colocating. It also avoids the "nightmare" you mention since any references would be in separate repos.
The problematic examples either have their assets and manifests colocated, or use a monorepo approach (colocating manifests and the global index).
That can be moved elsewhere / mirrored later if needed, of course. And the underlying data is still in git, just not actively used for the API calls.
It might also be interesting to look at what Linux distros do, like Debian (salsa), Fedora (Pagure), and openSUSE (OBS). They're good for this because their historic model is free mirrors hosted by unpaid people, so they don't have the compute resources.
However a lot of the "data in git repositories" projects I see don't have any such need, and then ...
> Why not just have a post-commit hook render the current HEAD to static files, into something like GitHub Pages?
... is a good plan. Usually they make a nice static website with the data that's easy for humans to read though.
So you need a decentralized database? Those exist (or you can make your own, if you're feeling ambitious), probably ones that scale in different ways than git does.
It’s really important that someone is able to search for the manifest one of their dependencies uses for when stuff doesn’t work out of the box. That should be as simple as possible.
I’m all ears, though! Would love to find something as simple and good as a git registry but decentralized
0: https://github.com/mesonbuild/wrapdb/tree/master/subprojects
> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.
This article is mixing two separate issues. One is using git as the master database storing the index of packages and their versions. The other is fetching the code of each package through git. They are orthogonal; you can have a package index using git but the packages being zip/tar/etc archives, you can have a package index not using git but each package is cloned from a git repository, you can have both the index and the packages being git repositories, you can have neither using git, you can even not have a package index at all (AFAIK that's the case for Go).
Julia does the same thing, and from the Rust numbers in the article, Julia has about 1/7th the number of packages that Rust does[1] (95k/13k = 7.3).
It works fine, Julia has some heuristics to not re-download it too often.
But more importantly, there's a simple path to improve. The top Registry.toml [1] has a path to each package, and once downloading everything proves unsustainable you can just download that one file and use it to download the rest as needed (a rough sketch follows the links below). I don't think this is a difficult problem.
[1] https://github.com/JuliaRegistries/General/blob/master/Regis...
[1] https://github.com/JuliaRegistries/General
[2] https://pkgdocs.julialang.org/dev/protocol/
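A rough sketch of that "one top-level file, fetch the rest on demand" lookup, assuming Registry.toml has a [packages] table mapping UUIDs to name and path entries (check the registry's actual format before relying on this):

```python
import tomllib  # Python 3.11+
import urllib.request

RAW = "https://raw.githubusercontent.com/JuliaRegistries/General/master"

# Download only the top-level index file...
with urllib.request.urlopen(f"{RAW}/Registry.toml") as resp:
    registry = tomllib.load(resp)

# ...then resolve a single package's directory and fetch just its metadata
# (Package.toml, Versions.toml, etc.) when it is actually needed.
def package_dir(name: str) -> str:
    for uuid, entry in registry["packages"].items():
        if entry["name"] == name:
            return entry["path"]
    raise KeyError(name)

print(package_dir("Example"))
```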
Another way to phrase this mindset is "fuck around and find out" in gen-Z speak. It's usually practical to an extent but I'm personally not a fan
Building on the same thing people use for code doesn't seem stupid to me, at least initially. You might have to migrate later if you're successful enough, but that's not a sign of bad engineering. It's just building for where you are, not where you expect to be in some distant future
This is too naive. Fixing the problem costs a different amount depending on when you do it. The later you leave it the more expensive it becomes. Very often to the point where it is prohibitively expensive and you just put up with it being a bit broken.
This article even has an example of that - see the vcpkg entry.
... Should it be concerning that someone was apparently able to engineer an ID like that?
Right now I don't see the problem because the only criterion for IDs is that they are unique.
Apparently it is the former, and most developers independently generate random IDs because it's easy and is extremely unlikely to result in collisions. But it seems the dev at the top of the list had a sense of vanity instead.
https://en.wikipedia.org/wiki/Universally_unique_identifier
> 00000000-1111-2222-3333-444444444444
This would technically be version 2, the DCE Security variant, which is supposed to be built from the date-time and MAC address plus a local domain and identifier.
But overall, if you allow any yahoo to pick a UUID, it's not really a UUID, it's just some random string that looks like one.
universally unique identifier (UUID)
> 00000000-1111-2222-3333-444444444444
It's unique.
Anyway we're talking about a package that doesn't matter. It's abandoned. Furthermore it's also broken, because it uses REPL without importing it. You can't even precompile it.
https://github.com/pfitzseb/REPLTreeViews.jl/blob/969f04ce64...
https://devblogs.microsoft.com/oldnewthing/20120523-00/?p=75...
Software engineers always make the excuse that what they're making now is unimportant, so who cares? But then everything gets built on top of that unimportant thing, and one day the world crashes down. Worse, "fixing the problem" becomes near impossible, because now everything depends on it.
But really the reason not to do it, is there's no need to. There are plenty of other solutions than using Git that work as well or better without all the pitfalls. The lazy engineer picks bad solutions not because it's necessarily easier than the alternatives, but because it's the path of least resistance for themselves.
Not only is this not better, it's often actively worse. But this is excused by the same culture that gave us "move fast and break things". All you have to do is use any modern software to see how that worked out. Slow bug-riddled garbage that we're all now addicted to.
Most software gets to take it to more of an extreme than many engineering fields, since there isn't physical danger. It's telling that the counter-examples always use the potentially dangerous problems like medicine or nuclear engineering. The software in those fields is held to more stringent standards.
As opposed to something like using a flock of free blogger.com blogs to host media for an offsite project.
It cannot be the case that software engineers are labelled lazy for not building the at-scale solution to start with, but at the same time everyone wants to use their work, and there are next to no resources for said engineer to actually build the at scale solution.
> the path of least resistance for themselves.
Yeah, because they're investing their own personal time and money, so of course they're going to take the path that is of least resistance for them. If society feels that's "unethical", maybe pony up the cash, because you all still want to rely on the work product they are giving out for free.
I like OSS and everything.
Having said that, ethically, should society be paying for these? Maybe that is what should happen. In some places, we have programs to help artists. Should we have the same for software?
You realize, there are people who think differently? Some people would argue that if you keep working on problems you don't have but might have, you end up never finishing anything.
It's a matter of striking a balance, and I think you're way on one end of the spectrum. The vast majority of people using Julia aren't building nuclear plants.
Refusing to fix a problem that hasn't appeared yet, but has been/can be foreseen - that's different. I personally wouldn't call it unethical, but I'd consider it a negative.
The issues are only fundamental with that architecture. Using a separate repo for each package, like the Arch User Repos, does not have the same problems.
vcpkg is just a general tire fire.
Much better to start with an API. Then you can have the server abstract the store and the operations - use git or whatever - but you can change the store later without disrupting your clients.
Mostly to avoid downloading the whole repo and resolving deltas from the history for the few packages most applications tend to depend on. Especially in today's CI/CD world.
It relies on a git repo branch for stable. There are YAML definitions of the packages, including URLs to their repos, dependencies, etc. Preflight scripts. Post-install checks. And the big one, the signatures for verification. No binaries, rpms, debs, ar, or zip files.
What's actually installed lives in a small SQLite database, and searching for software does a vector search on each package's YAML description.
Semver included.
This was inspired by brew/portage/dpkg for my hobby os.
Sure, eventually you run into scaling issues, but that's a first world problem.
As it is, this comment is just letting out your emotion, not engaging in dialogue.
SQLite data is paged, so you can get away with fetching only the pages you need to resolve your query.
https://phiresky.github.io/blog/2021/hosting-sqlite-database...
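The primitive the linked approach builds on is just HTTP range requests. A minimal sketch, with an assumed URL and page size (a cooperating static host answers 206 Partial Content):

```go
// Fetch a single 4 KiB "page" of a remote SQLite file with an HTTP Range request.
// The URL and offsets are placeholders for illustration.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com/index.sqlite", nil)
	if err != nil {
		panic(err)
	}
	// Ask for bytes 0..4095 only; the server should respond with 206 Partial Content.
	req.Header.Set("Range", "bytes=0-4095")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	page, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status %s, got %d bytes\n", resp.Status, len(page))
}
```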
But that's different from how you collect the data in a git repository in the first place - or are you suggesting just putting a Sqlite file in a git repository? If so I can think of one big reason against that.
But if you are, I wouldn't recommend it.
PRs won't be able to show diffs. Worse, as soon as multiple people send a PR at once you'll have a really painful merge to resolve, and GitHub's tools won't help you at all. And you can't edit the files in GitHub's web UI.
I recommend one file per record, in JSON, YAML, or whatever non-binary format you want. But then you get:
* PRs with diffs that show you what's being changed
* Files that technical people can edit directly in GitHub's web editor
* If 2 people make PRs on different records at once, it's an easy merge with no conflicts
* If 2 people make PRs on the same record at once ... ok, you might now have a merge conflict to resolve, but it's in an easy text file and the GitHub UI will let you see what it is.
You can of course then compile these data files into a SQLite file that can be served in a static website nicely - in fact if you see my other comments on this post I have a tool that does this. And on that note, sorry, I've done a few projects in this space so I have views :-)
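For illustration only (this is not the tool mentioned above), a hedged sketch of that compile step: walk a directory of one-JSON-file-per-record data and load it into a single SQLite table. Paths, schema, and fields are assumptions.

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"os"
	"path/filepath"

	_ "github.com/mattn/go-sqlite3" // assumed driver choice
)

// record mirrors one JSON file per package; fields are illustrative.
type record struct {
	Name        string `json:"name"`
	Version     string `json:"version"`
	Description string `json:"description"`
}

func main() {
	db, err := sql.Open("sqlite3", "packages.sqlite")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS packages
		(name TEXT PRIMARY KEY, version TEXT, description TEXT)`); err != nil {
		log.Fatal(err)
	}

	// One file per record keeps PR diffs readable; the SQLite file is a build artifact.
	paths, err := filepath.Glob("records/*.json")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range paths {
		raw, err := os.ReadFile(p)
		if err != nil {
			log.Fatal(err)
		}
		var r record
		if err := json.Unmarshal(raw, &r); err != nil {
			log.Fatalf("%s: %v", p, err)
		}
		if _, err := db.Exec(
			`INSERT OR REPLACE INTO packages (name, version, description) VALUES (?, ?, ?)`,
			r.Name, r.Version, r.Description,
		); err != nil {
			log.Fatal(err)
		}
	}
}
```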
I don't get what is so bad about shallow clones either. Why should they be so performance sensitive?
If 83GB (4MB/fork) is "too big" then responsibility for that rests solely on the elective centralization encouraged by GitHub. I suspect that if you could total up the cumulative storage used by the nixpkgs source tree on computers spread throughout the world, it would be many orders of magnitude larger.
Cut out the middleman and directly serve the query response to the package manager client.
(I do immediately see issues stemming from the fact that you can't leverage features like edge caching this way, but I'm not really asking if it's a good solution, I'm more asking if it's possible at all)
Anything where you are opening a TCP connection to a hosted SQL server is a non-starter. You could hypothetically have so many read replicas that no one could blow anyone else up, but this would get to be very expensive at scale.
Something involving SQLite is probably the most viable option.
Also Stackoverflow exposes a SQL interface so it isn't totally impossible.
All of the complexity lives on the client. That makes a lot of sense for a package manager because it's something lots of people want to run, but no one really wants to host.
Homebrew uses OCI as its backend now, and I think every package manager should. It has the right primitives you expect from a registry to scale.
I quite like the hackage index, which is an append-only tar file. Incremental updates are trivial using HTTP range requests making hosting it trivial as well.
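A hedged sketch of that incremental-update idea: if the local copy of the index is N bytes, request only bytes N onward and append them. The URL and file name here are placeholders, not Hackage's real layout.

```go
// Incrementally extend a local copy of an append-only index via an HTTP Range request.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	const indexURL = "https://example.org/01-index.tar" // placeholder URL

	// Open (or create) the local index in append mode.
	f, err := os.OpenFile("01-index.tar", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		panic(err)
	}
	have := info.Size()

	req, err := http.NewRequest("GET", indexURL, nil)
	if err != nil {
		panic(err)
	}
	// Only ask for the bytes we don't have yet.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", have))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// 206 means the server honored the range; append the new tail of the index.
	if resp.StatusCode == http.StatusPartialContent {
		n, err := io.Copy(f, resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("appended %d new bytes\n", n)
	}
}
```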
If it didn't work we would not have these massive ecosystems upsetting GitHub's freemium model, but anything at scale is naturally going to have consequences and features that aren't so compatible with the use case.
One thing that still seems absent is awareness of the complete takeover of "gadgets" in schools. Schools these days, as early as primary school, shove screens in front of children. They're expected to look at them, and "use" them for various activities, including practicing handwriting. I wish I was joking [1].
I see two problems with this.
First is that these devices are engineered to be addictive by way of constant notifications/distractions, and learning is something that requires long sustained focus. There's a lot of data showing that under certain common circumstances, you do worse learning from a screen than from paper.
Second is implicitly it trains children to expect that anything has to be done through a screen connected to a closed point-and-click platform. (Uninformed) people will say "people who work with computers make money, so I want my child to have an ipad". But interacting with a closed platform like an ipad is removing the possibilities and putting the interaction "on rails". You don't learn to think, explore and learn from mistakes, instead you learn to use the app that's put in front of you. This in turn reinforces the "computer says no" [2] approach to understanding the world.
I think this is a matter of civil rights and freedom, but sadly I don't often see "civil rights" organizations talk about this. I think I heard Stallman say something along these lines once, but other than that I don't see campaigns anywhere.
[1] https://www.letterjoin.co.uk/
[2] https://youtu.be/eE9vO-DTNZc
Scaling that data model beyond projects the size of the Linux kernel was not critical for the original implementation. I do wonder if there are fundamental limits to scaling the model for use cases beyond “source code management for modest-sized, long-lived projects”.
Consider vcpkg. It’s entirely reasonable to download a tree named by its hash to represent a locked package. Git knows how to store exactly this, but git does not know how to transfer it efficiently.
Naïvely, I’d expect shallow clones to be this, so I was quite surprised by a mention of GitHub asking people not to use them. Perhaps Git tries too hard to make a good packfile?..
Meanwhile, what Nixpkgs does (and why “release tarballs” were mentioned as a potential culprit in the discussion linked from TFA) is request a gzipped tarball of a particular commit’s files from a GitHub-specific endpoint over HTTP rather than use the Git protocol. So that’s already more or less what you want, except even the tarball is 46 MB at this point :( Either way, I don’t think the current problems with Nixpkgs actually support TFA’s thesis.
So where the article says "Package managers keep falling for this. And it keeps not working out", I feel that's untrue.
The biggest issue I have with this is really the "flakes" integration, where the whole recipe folder is copied into the store (which doesn't happen with non-flakes commands), but that's a tooling problem, not an intrinsic problem of using git.
This entire blog is just a waste of time for anyone reading it.
Well that’s an extremely rude thing to say.
Personally I thought it was really interesting to read about a bunch of different projects all running into the same wall with Git.
I also didn’t realize that Git had issues with sparse checkouts. Or maybe the author meant shallow? I forget.
O(1) beats O(n) as n gets large.
The most pain probably just comes from the hugeness of Nixpkgs, but I remain an advocate for the huge monorepo of build recipes.
> The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies. Cloning entire repositories to get a single file.
I have also had inconsistent performance with go get. Never enough to look closely at it. I wonder if I was running into the same issue?
Python used to have this problem as well (technically still does, but a large majority of things are available as a wheel and PyPI generally publishes a separate .metadata file for those wheels), but at least it was only a question of downloading and unpacking an archive file, not cloning an entire repo. Sheesh.
Why would Go need to do that, though? Isn't the go.mod file in a specific place relative to the package root in the repo?
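It largely doesn't anymore: since the module proxy protocol, the toolchain can fetch just the go.mod for a version instead of cloning the repo. A small sketch against the public proxy, using a real public module purely as an example:

```go
// Fetch only the go.mod of a module version from the Go module proxy.
// GET $GOPROXY/<module>/@v/<version>.mod returns just that file.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("https://proxy.golang.org/golang.org/x/text/@v/v0.14.0.mod")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	mod, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(mod))
}
```

The original pain described in the quoted passage predates this: without a proxy, resolving transitive dependencies meant cloning whole repositories just to read one file.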
> The hosting problems are symptoms. The underlying issue is that git inherits filesystem limitations, and filesystems make terrible databases.
Does this mean mbox is inherently superior to maildir? I really like the idea of maildir because there is nothing to compact but if we assume we never delete emails (on the local machine anyways), does that mean mbox or similar is preferable over maildir?
For example, we use Hugo to provide independent Go package URLs even though the code is hosted on GitHub. That makes migrating away from GitHub trivial if we ever choose to do so (Repo: https://github.com/foundata/hugo-theme-govanity; Example: https://golang.foundata.com/hugo-theme-dev/). Usage works as expected:
Edit: Formatting
For every project that managed to out-grow ext4/git, there were a hundred that were well-served and never needed to over-invest in something else.
Seems possible if every git client is also a torrent client.
It being suboptimal bandwidth-wise is frankly just a technical hurdle to get over, with benefits well worth the drawback.
I feel sometimes like package management is a relatively second-class topic in computer science (or at least among many working programmers). But a package manager's behavior can be the difference between a grotesque, repulsive experience and a delightful, beautiful one. And there aren't quite yet any package managers that do well everything that we collectively have learned how to do well, which makes it an interesting space imo.
Re: Nixpkgs, interestingly, pre-flakes Nix distributes all of the needed Nix expressions as tarballs, which does play nice with CDNs. It also distributes an index of the tree as a SQLite database to obviate some of the "too many files/directories" problem with enumerating files. (In the meantime, Nixpkgs has also started bucketing package directories by name prefix, too.) So maybe there was a lesson learned here that would be useful to re-learn.
On the other hand, IIRC if you use the GitHub fetcher rather than the Git one, including for fetching flakes, Nix will download tarballs from GitHub instead of doing clones. Regardless, downloading and unpacking Nixpkgs has become kinda slow. :-\
"Use a database" isn't actionable advice because it's not specific enough
> With release 2.45, Git has gained support for the “reftable” backend to read and write references in a Git repository. While this was a significant milestone for Git, it wasn’t the end of GitLab’s journey to improve scalability in repositories with many references. In this talk you will learn what the reftable backend is, what work we did to improve it even further and why you should care.
https://www.youtube.com/watch?v=0UkonBcLeAo
Also see Scalar, which Microsoft used to scale their 300GiB Windows repository, https://github.com/microsoft/scalar.
The index is used for all lookups; it can also be generated or incrementally updated client-side to accommodate local changes.
This has worked fine for literally decades, starting back when bandwidth and CPU power was far more limited.
The problem isn’t using SCM, and the solutions have been known for a very long time.
Or does fossil itself still have the same issues?
That is such an insane default, I'm at a loss for words.
https://news.ycombinator.com/item?id=45257349
[0] https://fossil-scm.org
Every ...king time, when I read something like "RFC 2789 introduced a sparse HTTP protocol", my brain suffers a short-circuit. BTW: RFC 2789 is a "Mail Monitoring MIB".
Not so smart, when we realize that one aspect of a secure and reliable system is the elimination of ambiguities.