>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. And LLMs especially are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in future training data but isn't available to the current model; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what give you the speed you need to bring out something new.
The next thing is that we also have open source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
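As a concrete illustration of how low that barrier is, here is a minimal parameter-efficient fine-tuning sketch using the Hugging Face transformers and peft libraries; the model name and hyperparameters are placeholder assumptions, not recommendations:

```python
# Minimal sketch: LoRA fine-tuning an open-weight model on a consumer GPU.
# Assumes `transformers` and `peft` are installed; the model name and
# hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B"  # any small open-weight model that fits in VRAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of all the weights,
# which is what makes single-GPU fine-tuning feasible.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
# From here, train with your usual loop or transformers' Trainer on your own data.
```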
>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or what they wanted to create.
> This was the same before: if you had a novel idea and made a product out of it, others followed.
You've almost captured the full picture of it.
If you have a great idea, it's not self-evidently a great idea until you've proved it can make money. That's the hard part, and it comes at great personal, professional, and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did. Or the LLM could just reason it out. It could save them some work, sure.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
Thanks, this helped crystallize something for me: the play the AI labs are making is anti-fragile (in the Nassim Taleb sense):
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
Instead of anti-fragility, I'd point you to the law of requisite variety.
You'll notice that all AI improvements are insanely good for a week or two after launch. Then you'll see people stating that 'models got worse'. What in fact happened is that people adapted to the tool, but the tool stopped adapting. We're using AI as variety-resistant, adaptable tools, but we miss the fact that most deployments nowadays do not adapt back to you as fast.
New models literally do get worse after launch, due to optimization. If you charted performance over time, it'd look like a sawtooth, with a regular performance drop during each optimization period.
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
> If you charted performance over time, it'd look like a sawtooth
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".
Is this insider info? The 'charted performance' caught my eye instantly.
Couple things I find odd tho: why sawtooth? It would likely be square waves, as I'd imagine they roll out the cost-saving version quite fast per cohort. Also, aren't they unprofitable either way? Why would they do it for 'profitability'?
It's not insider info, it's common knowledge in the industry (Google model optimization). I think they are unprofitable either way, but unoptimized models burn runway a lot faster than optimized ones.
The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
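If such routing exists, a crude version is easy to imagine. Here is a toy sketch of the pattern being described; the heuristic and model names are entirely invented for illustration, and no vendor's actual logic is known or implied:

```python
# Toy illustration of the routing described above: send prompts judged
# "simple" to a cheaper model. The heuristic and model names are invented;
# no vendor's actual logic is known or implied here.
CHEAP_MODEL = "small-previous-gen"     # hypothetical
FLAGSHIP_MODEL = "expensive-frontier"  # hypothetical

def looks_simple(prompt: str) -> bool:
    # Crude, made-up heuristic: short single-line prompts are "simple".
    return len(prompt) < 200 and "\n" not in prompt

def route(prompt: str) -> str:
    # A false positive here is exactly the degradation described above:
    # a hard but tersely worded prompt quietly gets the weaker model.
    return CHEAP_MODEL if looks_simple(prompt) else FLAGSHIP_MODEL

print(route("What's the capital of France?"))              # small-previous-gen
print(route("Prove the halting problem is undecidable."))  # also small-previous-gen:
                                                           # a false positive
```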
It's rumors based on vibes. There are attempts to track and quantify this with repeated model evaluations multiple times per day, but no sawtooth pattern has emerged as far as I know.
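For what it's worth, that kind of tracking is easy to run yourself. A bare-bones sketch of a repeated-eval harness; the prompt set, grading, and the stubbed-out API call are all placeholder assumptions:

```python
# Toy harness: score a fixed prompt set against a model on a schedule and log
# the results, so a sawtooth (if real) would show up in the time series.
import json, time
from datetime import datetime, timezone

FIXED_PROMPTS = [
    ("What is 17 * 23?", "391"),
    ("Name the capital of Australia.", "Canberra"),
]

def query_model(prompt: str) -> str:
    # Placeholder: swap in your provider's real API call here.
    return "canned response for testing the harness"

def run_eval() -> float:
    # Naive substring grading, fine for a toy fixed-answer set.
    hits = sum(ans.lower() in query_model(q).lower() for q, ans in FIXED_PROMPTS)
    return hits / len(FIXED_PROMPTS)

while True:
    record = {"ts": datetime.now(timezone.utc).isoformat(), "score": run_eval()}
    with open("scores.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    time.sleep(6 * 3600)  # a few runs per day
```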
The thesis that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true.
It certainly seems true that AI can easily replicate small projects and relatively narrowly scoped things. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a Flask website" or "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but also that we can now make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
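Making the arithmetic of that argument explicit, with the parent's own illustrative numbers (the 10x speedup is an assumption from the example above, not a measured figure):

```python
# Worked version of the argument above, using the illustrative numbers only.
AI_SPEEDUP = 10                 # assumed ~10x productivity boost, per the example

old_project = 10                # hours a "small" pre-AI project used to take
print(old_project / AI_SPEEDUP) # 1.0: the old project is now trivial to replicate

# But if you pour 10 AI-assisted hours into a project, its pre-AI equivalent
# would have taken ~100 hours, and a replicator armed with the same AI tools
# still needs ~10 hours, not 1.
new_project_equivalent = 10 * AI_SPEEDUP
print(new_project_equivalent / AI_SPEEDUP)  # 10.0 hours to replicate
```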
The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to it, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.
It already was, well before AI. The difference now is that a few big AI providers risk becoming the ultimate rent-seekers, increasingly capturing all of the value of that commodified knowledge whether the original knowledge generators want it or not. There is no opt-out; everything will be vacuumed up into the machine mind.
This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).
Bright future ahead!
It isn't just about AI. Some R&D domains started disappearing from literature and the public internet a decade before the first LLMs. The incentives to go dark emerged even when the adversary was other humans. AI is just accelerating a trend that was already there. Some areas of frontier computer science research have largely been dark for decades.
The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.
In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.
In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.
Sure, but the Forest point stands: whatever you can hide from the Forest slows it down and buys you some moat, even if only briefly?
There’s a deeply flawed hidden assumption here, which is that the individual in question is the only possible source for the relevant information that the AI can harvest. In the real world that’s absurdly rare; original thought is rare because we’re in the mix with billions of others.
Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.
This is misled by the nerd philosophy that the tech is the business. It absolutely isn’t; the tech is a small part of a startup. Witness that Spotify continues to exist despite being known and replicated by the major giants.
Poetically expressed, but ultimately based on a false notion of what a business actually is.
It's nuanced. Spotify is a giant; I think the example you're looking for here is SoundCloud. They almost went bust, but managed to get the ads business right and seem to be afloat now. So I think you're right in that sense, but also wrong in the sense that if I'm building a desktop app or tooling software, my business is probably much more easily replicated and displaced.
> Resistance isn’t suppressed. It’s absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.
On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?
I have been mulling this over and I think I have some solutions in mind, at least for myself.
• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.
• Take this as an opportunity to build closer, longer-lasting relationships with people.
• No more emphasis on metrics. I can microdose on dopamine from natural sources, like looking at a beautiful sky at sunset or cuddling my dog.
• Open hardware, or, at the very least, hardware we can still control of our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?
If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
Might just be independent discovery, but the main idea of this blog post is more or less the exact theory advanced in the recent book "The Dark Forest Theory of the Internet" by Bogna Konior (https://www.amazon.com/Dark-Forest-Theory-Internet-Redux/dp/...).
Well, I didn't know about this book, so I suspect, or hope, that the exact points I make won't map to the ones from the book.
It is true that the original "The Dark Forest" book made an impression on me, so I was often thinking about its theories and trying to apply them to various situations.
https://maggieappleton.com/ai-dark-forest
> This is the true horror of the cognitive dark forest: it doesn’t kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!
If this is the dark future that AI use brings us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
Unless you own the data centers yourself, you only get what they allow you to. And those gatekeepers, lawyers, and licensing agreements, while certainly not perfect, did let people monetize their intellectual work. Also, I think it is incredibly naive to think the owners of the compute and the energy won't play the hardest gatekeeper the world has seen, once the conditions become right.
If AI makes replicating other people’s ideas faster and easier, thus allowing capital-heavy market players to just absorb whatever idea you manage to execute, then perhaps, somewhat ironically, the economic moat you’ll have is your human nature, contact, and time? Perhaps we’ll see a shift in sentiment towards wanting to deal with and spend time with the people in the business, rather than just what the business can do for you and yours from a software perspective?
I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
> The platform doesn’t need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving.
This was insightful, but is it much different from the kind of data Google and other search engines have had access to for a long time?
And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse-engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.
Near the end you start to describe the paradigm the machines build in The Matrix. Neo is the aberration they seek to reincorporate to sustain their inability to innovate.
1. Sharing was never really safe, open source by default only became possible because of SaaS and rent-seeking behavior.
2. The early web (not the internet) wasn't hyperconnected. With the advent of global-scale social media, it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor: zero resistance, carrying infinite current. Also known as a short circuit.
I don't quite remember the details, but there's a fascinating section in Julian Jaynes's "Origin of Consciousness in the Breakdown of the Bicameral Mind" where he talks about how metaphors condense down into more complex forms, and as they do they unlock new realities previously impossible to fathom. The classical example here is the simultaneous discovery of calculus by Newton and Leibniz: the larger context defines what is possible.
I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like: "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, modes of engagement would've taken pages and pages to describe, but after, it becomes comparatively much cheaper.
... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighing functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when balance is shifted more toward network effects, it means that the whole system becomes more brittle because it makes it even harder to boot out incumbents.
There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.
Valuable ideas have always been the ones others find unintuitive, and it's kinda hard to get people on board because they're skeptical and need a long-form, tailored explanation to be convinced. If a short elevator pitch convinces them to go home and try to build it, it's probably already being considered by others.
Dark forest makes no sense to me. Why would a civilization eradicate another, spending huge amounts of resources (time, energy, material) when the universe has such an enormous scale that you cannot even get to each other in a timescale that makes much sense...
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
2. technology level grows unpredictably (technological explosion)
3. the goal of civilization is survival
4. resources are finite but growth is infinite
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
Not sure it's actually true, but that's the argument in the book.
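To make the shape of that deduction concrete, here is a toy expected-loss comparison built from those four axioms; every number is invented purely for illustration, and this is a gloss on the book's argument, not a quote from it:

```python
# Toy model of the chain-of-suspicion logic. All payoffs/probabilities are
# invented; the point is only that when striking is cheap (a "nuke", not an
# invasion) and a future rival could be fatal, striking first dominates for
# almost any belief about the other side.
STRIKE_COST = 0.01        # axiom: elimination is cheap for the striker
ANNIHILATION_LOSS = 1.0   # axiom: survival is the primary need

def loss_if_wait(p_hostile_later: float) -> float:
    # Technological explosion: a harmless neighbor may become fatal without warning.
    return p_hostile_later * ANNIHILATION_LOSS

def loss_if_strike() -> float:
    return STRIKE_COST

for p in (0.02, 0.1, 0.5):
    wait, strike = loss_if_wait(p), loss_if_strike()
    choice = "strike" if strike < wait else "wait"
    print(f"p(hostile later)={p:>4}: wait={wait:.2f}  strike={strike:.2f}  -> {choice}")
```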
I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It was unfortunate it was highlighted.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may easily be outcompeted by civilizations that learn to communicate and work with the other civilizations they encounter.
However,
1. You are assuming a lot, in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because that is how it tends to express itself in popular culture: robots on legs, and aliens with human body shapes.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit it. I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
What I am saying is that it is not a rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end on a semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike the ants that had molten metal poured into their nests [1]. Any real alien reasons would be alien to us.
[1] https://laughingsquid.com/ant-colony-sculptures-made-by-pour...
Cleansing is basically free for advanced civilizations in the books. The alien (Singer) who wipes out Sol in the third book doesn't even have to answer any questions from their manager about doing it; that's how cheap it is. While it's true that individuals desire cooperation, I think you can assume that civilizations will keep a lid on people who would completely destroy them (or, failing that, be destroyed). It seems like expansion of civilizations is not really an option. Singer's civilization has only one colony world, and they're already in some kind of extremely destructive war with it. Presumably the idea is that once your own people expand multiple light years away, all the logic about aliens applies to them too. On the other hand, if you can't expand, why not run scorched earth on the galaxy?
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints, though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
It's first-order thinking. Second-order would be to question whether trying to eradicate another race might motivate them to eradicate you, when they weren't motivated to do it before.
Are you asking about the 3 body problem version of this? Spoiler alert: The folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.
So really, in 3BP: it's inexpensive to eradicate, but insanely expensive to get the intentions of any other civilization you encounter wrong. They might kill you.
(again, this is just my interpretation of what 3BP said)
Makes some sense to me, as the prisoner's dilemma dictates at least some fraction will try to kill you. So you've got to go first.
Reminds me of the Dan Carlin take on aircraft carriers in World War II: if you in a carrier spotted an opposing carrier and didn't send everything you had before it spotted you, you were dead. The only move was to go all in every time.
"Timescales that makes sense" may be a human reasoning but not necessarily the reasoning of inconceivably advanced timeless civilizations. Sure, that planet of fish may be harmless now, but what about in a quick three billion years when they have FTL and AGI and Von Neuman probes and Dyson spheres and antimatter bombs? Easier to click the delete button now to save the trouble later.
The dark forest is conditional on eradicating another civilization not requiring huge amounts of resources, and on the universe turning out, over time, not to be of a scale enormous enough (and in the book there are agents actively working to make it smaller).
Bringing it back to the dark forest of idea space: it is an interesting question whether the space of feasibly executable ideas being small (as this essay assumes) is inherently true, or more a function of our inability to navigate/travel it very well.
If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.
Agreed; it's fiction based on accepting the premise of a zero-sum game.
It denies that more advanced civilizations might have better models of the universe, in which they know this isn't an issue and we're just stupid teenagers in the neighborhood playing dangerous games, and they merely take a look every now and then to see if we prove we will survive ourselves.
Competition kills margins (profits, security, QoL), so the budget for eradication should be quite high, but generally speaking the idea is to destroy even fledgling upstarts, back when the cost is low.
And the idea does not make sense once you include incomplete intel in the equation: what if the preemptive strike does not attain complete eradication?
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
Well if they're only an upstart, they don't have the ability to destroy you _yet_. You 'nuke' them in the hope they won't get that ability. You're aiming to stop MAD from being a thing.
In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.
If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
Point is, you cannot know if they are an upstart (whatever "upstart" means). It can be misinterpretation, it can be camouflage, it can be anything. But once you rain death, you'd better be prepared to be grateful for what you are about to receive back.
Four years is plenty of time to start launching. Also, MAD incentivizes disclosure. What would be the point of having secret nukes? Openly having them is the only way to stop the US using its nukes to stop your nuke program, in this scenario.
A space war is not needed, they could just send a few missiles to take out anyone.
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there allowing evolution to develop intelligence wherever it happens, taking a civilization out once it produces an AGI, or performing a reset if it doesn't. They have literally all the time available to them, and can easily travel the vast distances if needed.
I think this only applies to a rather narrow set of ideas.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist then I'm happy, because then I get to use it.
So what kind of things does this apply to? Likely zero-sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has so far been overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
Honestly, my hope is that the arbitrage that allowed big tech to make the kind of margins it does on software starts to go away because it's sooo cheap to build software. In other words, defending the technical moats we rely on today doesn't make sense in the future, because it's not a reliable way to make money. Aka no need to protect your technical secrets because there's no capitalist reason to, lol. Taken further, my naive hope is that societal attention moves away from this layer and onto whatever becomes the new way to make money, and the people left paying attention to software are big on sharing.
In fact, the whole article is filled with slopisms, just with the em dashes swapped for regular dashes and some improper spacing around ellipses to make you think a human wrote it.
If we are talking about releasing open source software, it can already be used by companies with zero effort.
I'm guessing the author is talking about released closed source software or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?
I'm genuinely confused and I think this article is pure slop without any core idea.
>You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It’s just cash, and they have more of it than you.
That's not exactly a new phenomenon, and it doesn't require AI. If anything it was worse in the 90s, with Microsoft starving out pretty much any would-be competitor they could find.
And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
Platforms cherry-picking successful ideas and stealing them isn't new. Platforms could do this because they had the capital and the platform (distribution).
What is different is that LLM platforms literally have the world's thoughts, ideas, and conversations, plus a big part of the code (or can generate it). It's like "pre-crime"... they could copy your idea, or catch a trend brewing and replicate it, before you even released it.
As a work of persuasive writing, this is unfocused and seems mostly generated.
One thing I would have expected of someone who knows their history - forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
The view here shows the big powers of technocapital consuming all else, stealing every idea.
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/, https://news.ycombinator.com/item?id=46659456, although I have some qualms with its focus on privacy), with open social protocols baked in, seems like it might be able to eat some of the vicious, consumptive technocapital, in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move and shift, much less fund and support the better world. The AT Proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; that's finding adoption, but also doing what conference organizer Boris said yesterday, "maybe we can just pay for things", supporting the projects doing amazing work: that's a huge unknown that is essential to actually steering us out of the dark technology, where none of us get to see or have any way into how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the world OS, has been removed from the gods' enlightenment / our Homo erectus mankind-the-toolmaker natural-scientist role.
I think the answer to the Dark Forest fear is to build together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative, socially empowering ways.
I hope the open source models / crowdsourced approaches to training will also be an important part of the ecosystem, keeping it honest and providing an exit, just as open source does for operating systems and other important software.
But I don't see a trend of big companies really opening up. They usually open up only when it benefits them (which can happen, and has happened in various scenarios). Everybody is accepting and open while trying to grow, and closes up once it can reach a monopoly.
HN needs a better AI slop filter.
Or maybe I do. Maybe I can vibe-code a browser extension that preloads TFA links and auto-hides anything that isn't sufficiently human-authored.