I feel like if people keep using AI as a blanket term for "inequality" and "inequality accelerants" then yeah, it's "AI"'s fault, when in reality the whole thing needs to be decoupled.
"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
AI is massively asymmetric in its benefits, which are overwhelmingly concentrated among those with extreme capital, and the authoritarians they're aligned with.
The benefits for them include:
- replacing workers with lower quality (but good enough) AI solutions, which degrade the quality of nearly every product or service for the consumer, but not by enough to offset the labor cost savings
- mass surveillance at low cost: a way to take the absurd amounts of data humanity now produces and use it to subjugate people
- propaganda/deception/misinformation: a new vector for propaganda which people are naively inclined to trust. Bonus points for the "flooding the zone" strategy, which AI makes easier
Benefits to the worker:
- lower cost of goods and services (but not for you, silly - they'll still be taxing you via inflation to fund their wars of conquest)
I wholeheartedly agree with and encourage this kind of academic distinction. However...
Until people with billions of dollars behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others... the distinction has no practical use.
(And before someone says "that's the government's job!", consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all. None, naturally.)
We often look back on earlier stages in world history like we're somehow more advanced, or inherently smarter, than past societies. But one of the things made clear by the way this problem lines up perfectly with conflict during the industrial revolution (including the innovators flagrantly violating the law in order to win their advantage) is that for all our technological sophistication, we haven't really gotten better at the hard, human things: social coordination, planning, democracy. (Perhaps that's because we're still living under the same system that the industrial revolution finally birthed.)
How much actual money do you think the “people with billions of dollars” have in comparison to the needs of the population as a whole? I think you’re very confused about where the actual income in the economy goes.
That is the question society is currently asking with articles like this one.
Given that (allegedly) "your salary" won't be the answer for a significant chunk of the population soon, and all that money will instead (allegedly) go to the bosses doing the firings and the AI companies they employ.
There is currently more than enough total production for people to live quite well.
If AIs simply replace people, the same total work gets done. It's just a matter of who gets the profits from it.
It won't be that simple, to be sure. Nonetheless we already produce far more than subsistence, and there's no reason why a UBI would change that. If it increases the price of some commodities because now everyone can buy them, I'm ok with that. It already horrifies me that some go hungry in the fattest nation in history.
Inflation is more likely when the net number of dollars increases without a corresponding increase to production. Taxing earners at a higher rate doesn’t do this. Printing money at the central bank does.
I don’t understand why welfare is the answer. To me it seems we’ve super failed if that’s the case — just brings everyone down except a few ultra rich people.
UBI is not welfare. It is just a livable minimum wage, for everyone who works. For those who cannot work, it replaces welfare, but that is not its primary purpose.
As a welfare replacement, it is much more efficient, since there is no effort spent determining who qualifies. People can spend their money however they want, rather than through the patchwork of separate programs we have now.
It doesn't need to bring anyone down. It's just a different way of distributing what we already receive. Ordinary workers will receive $X in a monthly check, and their salary can be reduced by $X (since the minimum wage can also be abolished).
That does mean that the desirability of some jobs will shift. Good. We have a bunch of very dirty jobs being done for minimum wage, even though demand is extremely high. I'd love to see the garbage men and chicken processors get more money for their dangerous work.
And if I get less for my cushy desk job, oh well. Especially since we seem to be putting all of the effort into replacing me, and none into the jobs that come with hazards to life and limb.
A UBI is basically impossible to implement on a large scale without there being significant downsides. In what world does increasing the budget by a trillion dollars or more work out well?
If the promises of AGI pan out, there will be nothing a human will be able to do better than an AI. If humans can't contribute economically, what else could things look like?
Well, inflation is equivalent to a flat wealth tax that doesn't consider illiquid assets, and is entirely in the hands of the government that imposes the UBI.
"cause increased prices for consumer/essential goods" is what you meant (since buying power is moved to people who are reliant on buying them), but this is a one-time transition to a new equilibrium (so is mitigable by increasing the UBI to account for it), not a constant ever-looming devaluator.
True, but again, the other points are more damning.
We're talking about an increased federal budget in the hundreds of billions/trillions to support such a UBI. That will cause a massive increase in taxation on the people who can still find jobs.
To make matters worse, the government in 10-15 years will likely be spending ~25% of its budget on interest payments alone. Hiking the federal budget up even more sounds like a hard sell.
I’m not saying it would be revenue neutral, but a UBI would (or should) eliminate a bunch of various other entitlements. Even social security should be relatively uncontroversial to get rid of.
Everyone. That includes the small number of people hoarding a majority of the wealth. Everyone needs to contribute to the wellbeing of society as a whole and nobody is exempt.
I'd like to emphasize that the above should be immediately obvious. The fact that it's not does not bode well for humanity's future.
Billionaires simply _should not exist_. The fact that the power to shape societies is concentrated in so few can account for many of the existential threats we face today. AI is not "the problem", it's merely the latest symptom of our broken system and the prioritization of the wrong goals and outcomes.
AI, automation, and globalization would all be uncontroversially brilliant if the benefits weren't distributed like "150% of net benefit to capital, -50% net benefit to labor, better hope some of it trickles down brokie!"
Good luck taking away the detached single family homes, pickup trucks, SUVs, commercial flights, out of season fruits/vegetables, and imported manufactured goods. The people that expect those things are the “small number of people hoarding a majority of the wealth”, and there are quite a few of them (probably 1B+ worldwide).
There is a wild difference between asking people not to eat apples in December in the northern hemisphere and asking people not to move wealth around to avoid paying taxes when they have more resources available to them than multiple countries.
Comparing middle income 1st world citizens to dragons on their mountains of gold is disingenuous at best.
Yes, but in the opposite way to what you think. Do the math, there's billions of people consuming the overly cheap, massively subsidized goods and services parent listed; there's only so many billionaires and they have only so many billions, and most of it is just fake bullshit accounting paper-shuffling anyway.
My comment did not compare those enjoying detached single family homes and large vehicles and flying to vacations with the richest few thousand people in the US and Europe.
Avicebron brought up inequality as the root cause.
DavidPiper indicated only the few thousand richest as the root cause.
Rayiner questioned if those few thousand richest have the means or capacity to reduce inequality.
estimator7292 responded that everyone has to help reduce inequality.
To which I wanted to point out exactly what would need to be sacrificed, because it would involve sacrifices among the top 10% to 20% of the world (constituting many on this forum) which those 10% to 20% would not even consider a "luxury". It is easy to claim a billionaire's private jet is an expendable luxury exacerbating inequality, but the reality is the bar is far lower than that (see statistics on energy used per capita, which can serve as a good proxy for which side of the inequality divide the lifestyle you might expect falls on).
That is why we are all mostly talk and no walk, because push comes to shove, we can't even get a sufficient fossil fuel tax passed to slow climate change for our own descendants, much less voluntarily decrease our standard of living solely for the benefit of others in the world.
Money is just a way of keeping track of how large a fraction of the future output of the civilization any one person or entity is entitled to. This is by consent.
Generalize "people with billions of dollars" to all Americans - and then this logic will start to work fully.
"Until people with salaries of many dollars per hour behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others 90% of the world that live on less than 2 dollars per day... The distinction has no practical use."
Moreover, these people do not simply lobby the government, but directly elect it, and actually have many times more money at their disposal than the rest of the world.
America only has the shallowest appearance of a democracy where voters get to control who is elected.
The electoral college system, coupled with its winner-takes-all implementation in most states, means that voting is a sham for 80% of the population. The other 20% live in a swing state and their vote can at least potentially affect the outcome of an election, but even there "your vote" will literally be cast opposite to what you put on the ballot unless you end up being part of the winning majority.
Ownership of the economy is split roughly 30/30/30 between the top 10%/1%/.1% with the bottom 90% of people making an entrance as the rounding error. If you picture "the owners" by drawing a representative sample of 10 people:
1 Normal person
3 Doctors / Lawyers / Engineers ($1M+ net worth)
3 Successful Entrepreneurs ($10M+ net worth)
3 Ultrawealthy ($50M+ net worth)
It's worth putting these through the fundamental theorem of capitalism (rich people get paid for being rich in proportion to how rich they are) to solve for passive income from asset appreciation. Plugging in the crude figure of 10%/yr (feel free to bring your own rate):
1 Normal person
3 Professional ($100k+/yr passive)
3 Successful ($1M+/yr passive)
3 Wealthy ($5M+/yr passive)
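To make the arithmetic above concrete, here is a rough sketch in Python (the 10%/yr rate and the net-worth tiers are the same illustrative assumptions as above, not data):

    # Passive income from asset appreciation at an assumed rate,
    # applied to the illustrative net-worth tiers from the sample.
    RATE = 0.10  # 10%/yr appreciation; bring your own rate

    tiers = {
        "Professional": 1_000_000,    # $1M+ net worth
        "Entrepreneur": 10_000_000,   # $10M+ net worth
        "Ultrawealthy": 50_000_000,   # $50M+ net worth
    }

    for label, net_worth in tiers.items():
        print(f"{label}: ~${net_worth * RATE:,.0f}/yr passive")
    # -> ~$100,000, ~$1,000,000, and ~$5,000,000 per year respectively

Changing RATE scales the whole column linearly, so the ordering of the argument doesn't depend on the exact figure.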
You get your incentives where you get your money. Most people get most of their money from working, but the wealthy get most of their money and incentives from the assets they own. In between it's in between.
Are the in-betweeners part of the problem? Sure, but we have a foot on either side of the problem. We could get hype for many of the plausible solutions to aggregate labor oversupply (e.g. shorter workweeks) even if it meant our stocks went down. Not so for 6/10 people in that sample. The core problem is still that the economy is mostly inhabited by people who work for a living but mostly owned by people who own things for a living and all of the good solutions to the problem require rolling that back a little against a backdrop that, absent intervention, stands to accelerate it a lot.
EDIT: one more thing, but it's a big one: the higher ends of the wealth ladder have the enormous privilege of being able to engage in politics for profit rather than charity/obligation. A 10% chance of lobbying into place a policy that changes asset values by 10% is worth $1k to a "Professional", $50k to a "Wealthy", but $8B to Elon Musk. The fact that at increasing net worths politics becomes net profitable, and then so net profitable as to allow hiring organizations of people to pursue it, means the upper edge of the distribution punches above its already-outsized weight in terms of political influence. It goes without saying that their brand of politics is all about pumping assets.
> Ownership of the economy is split roughly 30/30/30 between the top 10%/1%/.1%
This is simply not true and a cheap attempt at manipulation.
> If you picture "the owners" by
A practically useless characteristic. A more relevant one would be to look at "the spender". Possessions are simply what hasn't yet been spent, and they poorly reflect the resources actually controlled.
> for passive income
Frankly, mentioning passive income in this context isn't even idiotic, it's a clinical diagnosis. Or, more likely, a cheap attempt at manipulation.
Salary (income) is a horrible choice to serve as the marker to determine a person's (family's) fair share contribution to the burden of paying the costs to operate a society. Not everyone is so poor that working for a living is a matter of survival.
I can think of only one universal marker that would assure every citizen shares the burden of paying for society's costs equally: wealth.
Adjusted in a manner that the financial impact of one thousand dollars to a full-time McDonald's counter worker is transformed into a dollar amount that causes the same relative financial impact to everyone, all the way up to the wealthiest family in America.
Yep, that's why substitute teachers' interests are more zealously guarded by Congress than the interests of billionaires are. Teachers have wielded the enormous power they hold to get a <= $250 deduction for school supplies they purchase with their own money.
GP said “a substitute teacher” vs “a billionaire” - why have you decided to pretend they said something else?
You’re also flatly wrong, given you’ve utterly ignored the trivial things wealth buys (for starters), but hard to expect accuracy when basic honesty is so lacking.
If I understand what you're saying it's that as rich as they are, the amount of money the ultra-wealthy own just doesn't add up to nearly enough to give everyone a quality of life that they deserve / once had?
Perhaps what's happening is that in their attempts to reach a personal all-time high in their bank accounts, the ultra-wealthy are destroying value and economic systems en masse with little regard to the efficiency of their money siphoning process?
It's kind of like a drug dealer selling brain burning addictive substances to a few people on a street. Sure they're going to extract a person's life savings to date and whatever money that person can steal once they're addicted, but that value pales in comparison to what that person could have made over their career, what it could have made if properly invested, the cost of law enforcement to deal with these addicts, the cost of the stuff that they destroy in their quest to get money to buy drugs, the opportunity cost of them not raising their kids to be productive members of society... like it all just snowballs, all so some asshole can make a few bucks...
The ultra-wealthy are doing that shit where people burn acres of pristine forests to get some biochar -- but to the entire world.
Isn’t it strange
That princes and kings,
And clowns that caper
In sawdust rings,
And common people
Like you and me
Are builders for eternity?
Each is given a bag of tools,
A shapeless mass,
A book of rules;
And each must make-
Ere life is flown-
A stumbling block
Or a stepping stone.
Bloomberg Billionaires Index publishes daily net worth for the top 500 globally. As of end of 2025, the top 500 totaled $11.9 trillion, with $2.2 trillion added in 2025.
Forbes Real-Time Billionaires covers the full ~3,000-person list. The 2025 annual snapshot: 3,028 billionaires with combined net worth of $16.1 trillion.
Forbes 400 (US only): the 2025 cutoff was $3.8 billion to make the list. Forbes publishes the aggregate annually, and in recent years the total net worth was over $5.4T for the 400.
Well, the top 10% richest people control 67% of the wealth, and top 1% richest have 30% of the wealth in the US. The top half has > 97% of the wealth.
It appears you are the one very confused about wealth distribution in the US. Maybe you are confusing "income" with "wealth hoarding". The hoarding is happening to a gross degree, and this is why there should be a 1% tax on fortune portions over $100 million and 2% on portions over $1 billion. That, and going back to the 70% tax on incomes in the top bracket (e.g. > $10 million/yr).
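A minimal sketch of how that bracket scheme would compute, assuming (my assumption, not the comment's) that each rate applies only to the portion of the fortune above its threshold:

    # Hypothetical marginal wealth tax: 1% on the portion over $100M,
    # 2% on the portion over $1B, per the brackets proposed above.
    def annual_wealth_tax(net_worth: float) -> float:
        tax = 0.0
        if net_worth > 100_000_000:
            tax += 0.01 * (min(net_worth, 1_000_000_000) - 100_000_000)
        if net_worth > 1_000_000_000:
            tax += 0.02 * (net_worth - 1_000_000_000)
        return tax

    print(f"{annual_wealth_tax(500_000_000):,.0f}")     # 4,000,000 -> $4M/yr on a $500M fortune
    print(f"{annual_wealth_tax(10_000_000_000):,.0f}")  # 9M + 180M = 189,000,000/yr on $10B

Under that reading, the effective rate stays below 2% of net worth even for a $10B fortune.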
Those taxes are coming. Trumpty Dumpty and the oligarchs brought it on themselves. Maga grifters are getting f'd in the midterms. Maybe maga should have picked a few dear leaders with some integrity instead of greedy frauds.
Their point is “billions” in securities representing the market capitalization of various organizations is not equivalent to purchasing power. The organization is not a silo full of energy, food, construction workers, and healthcare.
The “billions” are a constantly changing representation of what the average buyer in the market might be willing to pay at a certain point in time.
I really don't understand this "simping online for billionaires" hobby. Is there a signup I missed somewhere, where they pay $100 for every post one makes glazing and defending them as a class?
Well, I’m a billionaire, why would I vote against my own interests. I mean, yes, I’m currently a bit down on my luck (it’s embarrassing, to be honest), but I’m sure my net worth will move right up there with Elon’s very soon, and so it would be foolish for me to support taxes on the wealthy.
There was this silly 'weird nerd' meme about someone jumping in front of Elon Musk every time someone fired a bullet at him. This feels similar.
Billionaires are apparently what we should all aspire to, even though it is extremely hard to find any that got to where they are without getting there at the expense of others.
Yes, absolutely. But the billionaires will spend a fraction of their billions to stop that from happening so that's a lopsided fight already. The problem is the ratchet effect. Money is like gravity: have a little bit of it and you are attracted to the larger piles and become part of them. Have a lot of it and you start attracting more of it, even if you're not working. So once the balance is disturbed that ratchet effect makes it hard to lose money faster than you are making it.
Breaking that cycle will take some extraordinary effort and I suspect that the article gets at least that portion of it correctly. This isn't going to go away without a fight of some sort, whether a physical or a legal one is not all that important but since the billionaires have stacked the deck against the rest of us using their money in all ways except for the physical one that seems to be one of the few avenues still open.
And for how long it remains open is a question, there is a fair chance that AI will not only enable stable dictatorships but will also enable wealth extraction at a level that we have not seen before.
For instance: we are allowed to have this conversation by some billionaires. If they should decide you and I can no longer converse then that will be that and it is going to take a lot of effort to circumvent any blocks.
There are some 10 or so billionaires that can ruin my existence overnight, take away my means of living and that of those around me. And there wouldn't be much that I could do about it.
People have been radicalized over much less than this.
The wealthiest group in the US is the 70-95%, they have over double the wealth of "the billionaires".
But we can't talk about this because it includes a large tract of the white collar everyday man workforce.
This is why the focus is so heavy on billionaires, so heavy on increasing minimum wage, so heavy on protecting immigrants. Those are all virtuous values that also bolster the value of the 70-95%, while piling all the blame (and responsibility) on the 1%.
The wealthiest group in America is doing an excellent job at protecting (and growing) their wealth.
(for those wondering, the "back breaker" of this class is zoning laws and new housing, everyone is aware how intense NIMBYism is in the middle/upper middle class hives).
It's only logical when you view that wealth as a Scrooge Mcduck style vault of gold to raid, and are ignorant to the fact that it's almost entirely mark-to-market assets that have zero ability to buy food or pay for housing.
The middle class has the gold vault (well the closest thing), and that's where the redistribution would happen.
If you don't believe me, look at Europe. You can be a baker and make $35k yr, an SWE and make $65k yr, or a doctor and make $100k/yr.
You may say "Yeah, that's great, they live happy lives!"
But then convince American engineers they need to take a $140k paycut and the doctors a $220k paycut so that we can pay bakers $10k more a year. They'll just tell you the billionaires are the problem, and you'll believe them.
Yeah, and those people will be forced to liquidate their holdings (aka sell their houses in a market where most of the houses are for sale) to pay their share of the “wealth tax 2.0” after the “Billionaires” version fails to bring in enough money to pay for all the things promised.
Exactly. This is why I don’t support all the current wave of Democrats’ “wealth tax” policy ideas. There isn’t anywhere near enough money in billionaires pockets to keep the promises they’re making - especially once you account for them simply fleeing overseas, and also for even if you “catch” them, the downward pressure on their assets’ value that forced liquidation would have.
Once the Democrats who are elected on the fantasy of making Musk and Bezos pay for everyone’s past and future college/student loans, Medicare for all, UBI, high speed rail, while simultaneously closing every fossil fuel plant and subsidizing clean energy to replace it at the same cost — once they’ve failed to raise enough to pay for 1/10 of those promises, they’ll be coming for everyone more “wealthy” than $100k net worth.
You can just look how successful the USSR was, or China before they sold out their own Communist ideals. Most people were just subsistence farmers, or factory workers living in crowded minimalist apartments if they were lucky.
America isn’t the only country on earth, it’s just one of hundreds of others. That alone makes me confident about the future not being even 1/10 as gloomy as some people think.
We have a lattice of diverse legal and economic systems in the world and it takes just a single one to figure out the solution for others to learn from.
America routinely ranks fairly low on the "happiest countries" rankings. Currently #24 behind most of Europe, with the Scandinavian countries typically at the top.
I do think there is a good chance that, in the not-so-distant future, universal basic income will become the norm. In previous industrial revolutions, large numbers of jobs were created to offset those that were lost. But there are very few things AI cannot perform faster and cheaper. Best case scenario, we will be in a world with both high productivity and high unemployment. Governments may have no choice but to provide universal income to everyone.
UBI requires a wealthy elite class to tax from that also does not capture the government and reduce or eliminate the UBI. The status quo shows us that if a wealthy class exists, they will capture the government and eliminate benefits for the masses. That's why minimum wage does not rise and, as another commenter said, we do not have universal healthcare.
> consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all.
To hear Marc Andreessen tell it, the US tech industry's rightward turn in the 2024 campaign was specifically intended to head off any attempt to regulate AI [0]. So the blame rebounds to tech CEOs even if you believe that only the government should take a holistic view of a given technology's impact.
Yes, our tech leaders would rather send America into fascism than have any impediments put in the way of their business plans. It is disgusting, and sad.
Lobbying is "directly advocating for or against particular legislation or regulations." Writing your representative is lobbying. Fighting bad legislation is lobbying. Any good faith attempt to argue a position on a government policy is lobbying.
Giving money to politicians or their campaigns is not lobbying, and it is already illegal for lobbyists to do so.
What could and should be made illegal is allowing unlimited political campaign donations via Super PACs. Political donations aren't lobbying.
It's worth being clear about what you actually want to make illegal because you probably don't want to ban anyone from arguing a political position.
Isn't it an attempt to give structure to something that surely would have existed illegally otherwise? Banning something doesn't automatically stop it.
Who is going to lobby to make it illegal? Our system is broken and won’t fix itself.
Inequality is going to continue to increase until society collapses. If we want a better world we need to prepare for this eventuality by building avenues of popular action to return power to the people. Once the oligarchs have fucked up enough people’s lives, popular action becomes a realistic way out of this mess.
For example, the people fighting inequality can use AI to their advantage, and focus criticism on billionaires (and general bad AI usage, like slop PRs) instead of ordinary AI users.
> Until people with billions of dollars behind them do something with that money…
Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.
I suggest you keep going with that math. I'll use the numbers from here [0]. 924 billionaires with an overall wealth of 7.5 trillion. Split among 300 million people, that's about $25k for everyone.
Here are some points of consideration:
1. They don't have $7.5T in liquid assets. The average American won't be able to use that $25k to pay a hospital bill or eat. Also note that this one-time wealth transfer won't even pay in full for one major surgery.
2. You've wiped away the incentive for getting-big mentality which drove some of the billionaires to innovate which advances society to this point. Think - discouraging a future Jobs from making another iPhone-like device.
3. After the one-time transfer, it turns out we need more money for the common folks. "Why is the line at $1B? Isn't $900m enough? The line should be $100m." And so on and so forth.
The problem with billionaires is they have a vastly disproportionate voice in the political system, which leads to ineffective politicians and policies not aligned with a thriving society.
eg: cutting funding to the IRS and advanced science, both of which have long proven positive dividends… or advancing new wars abroad to directly blow up money.
Plus, billionaires are nothing special. Right time, right place.
Steve Jobs is a perfect example of someone who was in it for the love of the game. He wouldn’t have been any different if his income was taxed at 90%.
> 2. You've wiped away the incentive for getting-big mentality which drove some of the billionaires to innovate which advances society to this point. Think - discouraging a future Jobs from making another iPhone-like device.
Am I meant to believe that we wouldn't have iPhone-level innovation if inventors couldn't become billionaires?
This makes no sense. We have so much more innovation than we have billionaires, always have. Ability to become a member of the 0.001% is not a barrier to innovation, not in America, not anywhere, and never has been.
No one serious is claiming there should be zero wealth inequality. Inequality is ineradicable. The claim is that wealth inequality can reach a degree that becomes corrosive to society as a whole and severs the link between innovation and profit, because it becomes more profitable to hoard wealth and collect capital gains and interest than it does to innovate and create things in the real world.
It's entirely possible to preserve (and in fact would actually strengthen) the profit motive if we changed incentives to get rid of the wild capital hoarding we see today.
Literal money transfer is not the point. It's about power, and the concentration of it to insulate future consolidation of power.
Money is a made-up system to provide a relatively stable society; if that stops working it's not good, and violence becomes what's left.
Marie Sam Antoinette and brethren saying let them eat cake (or everyone will just build new things with (our) AI) without a sense of what is happening / about to happen to the broader populace is on a similar track.
The "billionaires" should use their influence to help with this transition invest figuring out how these new system will work.
No one should care if that means more "millionaires" vs fewer billionaires; these numbers are social constructs. The point is power and self-determination. History shows that lacking that for too many people leads to a breakdown into broad violence and/or dystopic robot overlords guarding a diminishingly small and isolated elite.
The power and influence (and damage caused) does not scale linearly with net worth. And you don’t need to have money on hand to be able to use it to harm others, you can e.g. use it as a collateral for loans and funding to build your child crushing machine.
Personally I wager society would be better if the excess wealth of billionaires was simply deleted, or burned. It would be better yet if that wealth was used in our shared funds to build common infrastructure and services. Leaving such wealth in such few hands is really the worst you could possibly do for society.
Why not just force them to build the common infrastructure and services, and in exchange they get to keep the money? e.g. Jeff Bezos has to build some subway stations in NYC or something.
That way you get somebody with a proven track record of building big projects who is also motivated by money, so the common infrastructure and services are handled competently.
> Why not just force them to build the common infrastructure and services, and in exchange they get to keep the money?
Because it is undemocratic, ripe for corruption and abuse, and will never work in practice (as the rich will inevitably find ways to game the system). What you are describing is basically just aristocracy, where the rich get to decide what is best for the rest of us.
Ah yes. Let's trust civic engineering to a man who ran a company that had front-line workers using piss bottles to keep up with quotas. This cannot possibly end badly.
Uh-huh. It brings clarity to say you'd be happy to have the wealth destroyed. These are two different concepts, and the second one (about redistribution) always muddles these conversations.
1. Billionaires shouldn't wield lots of wealth, because it's scary.
Sticking to that concept makes the discussion a lot clearer. Never mind concept 2, it's haunted by the futile spirit of Marx and he's throwing crockery around.
Personally I am a fan of logistic taxation, where the mean income (including capital gains) pays 50% in tax and every standard deviation σ above (or below) the mean pays extra (or less) according to 1 / (1 + e^-σ).
What will happen with this taxation is that if everybody makes the same income, then everybody pays 50% in tax. If some rich dude is making a lot more money than everybody else, they will lower the tax for everybody else while paying a lot more themselves. At some point (say 3 standard deviations above the mean) you end up getting less after taxes than had your income been lower (say 2 SD above); in other words, the limit is 100% tax for extremely high incomes (and 0% for extremely low incomes). That is, I favor a system that has a maximum income, where you are actively punished for making more.
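A small sketch of how that schedule behaves, reading the rate as applied to total income (my reading; the mean and standard deviation below are made-up illustrative inputs):

    import math

    def tax_rate(income: float, mean: float, sd: float) -> float:
        # Logistic schedule: 50% at the mean income, rising toward 100%
        # (or falling toward 0%) per standard deviation above (below) it.
        sigma = (income - mean) / sd
        return 1.0 / (1.0 + math.exp(-sigma))

    MEAN, SD = 60_000, 40_000  # illustrative only
    for income in (20_000, 60_000, 140_000, 180_000):
        rate = tax_rate(income, MEAN, SD)
        print(f"{income}: rate {rate:.0%}, after-tax {income * (1 - rate):,.0f}")
    # At 2 SD above the mean (~140k) after-tax income is higher than at
    # 3 SD (~180k), which is the "maximum income" effect described above.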
> 2. You've wiped away the incentive for getting-big mentality which drove some of the billionaires to innovate which advances society to this point. Think - discouraging a future Jobs from making another iPhone-like device.
In general, this is total bullshit. But in the particular, Jobs made his first billions from selling Pixar to Disney, not from Apple.
This distinction is good in academic circles and similar (like on here). But the public (ordinary people who don't regularly visit Hacker News -- or even know that Hacker News exists) don't care. To them, AI == inequality and inequality accelerants, because it is funded and run by the richest, most powerful people on Earth. And those very people are making everything worse for all but themselves, not better. Nobody is going to care about academic distinctions in such circumstances.
It's because the consequences of AI are so direct and obvious, and also faster, whereas the inequality and job losses from other tech advances are just less direct.
That is, it's not hard to see why so many main streets in smaller towns have boarded up retail stores, since you can now get anything in about a day (max) from Amazon. But Amazon (and other Internet giants) always paid at least semi-plausible lip service to the idea that they were a boon to the small fry (see Amazon's FBA commercials, for example). But you've got folks like Altman and Amodei gleefully saying how AI will be able to do all the work of a huge portion of (mostly high paying) jobs.
So it's not surprising that people are more up in arms about AI. And frankly, I don't think it really matters. Anger against "the tech elite" has been bubbling up for a long time now, and AI now just provides the most obvious target.
Does economics or political theory focus on centralization, practically speaking? Not as a normative claim, but in terms of what the actual effects are. It just feels like we're at a centralization of power of unprecedented scale, to the point where no previous theories or models could really apply (in order to make analytical progress - I mean, sure, feudalism is honestly becoming a scarier and scarier analogy, but still, there are significant differences).
I'm pretty much only thinking about these kinds of problems at my job at this point, so this is important to me in that regard
Here's an idea for how to do that: treat frontier AI as a sort of 'common carrier'. The only business that frontier AI labs are allowed to conduct is selling raw tokens - no UI. Thus, 'claude code' would have to come from some other company. This would segment the AI industry, and, maybe, prevent a single entity (or small number of entities) from capturing all value.
Sounds promising honestly. One of the scariest parts of the big AI labs is all of the exclusive training data they get through their UIs. (It’s unclear whether distillation is a feasible way to close the gap).
If there were another party involved, that would (hopefully) diversify power that (potentially) comes with those streams of data.
It’s a bit ironic that the USA has mostly abandoned interoperability after being one of the pioneers with the American manufacturing method. [0]
How can you hope for anything better if you consider it an us versus them situation? When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits that are attributable to the use of AI are taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Most often, aggressive taxation like this is criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying it's moving too quickly, that's just yet another positive effect.
> When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?
The response is "we don't believe you" because their actions show that they are hellbent on accelerating inequality using AI and they have offered absolutely no concrete plan or halfway convincing explanation of how, if their own predictions of AI's future capabilities are correct, we're supposed to go from here and now to a future that isn't extremely dark for the vast majority of humans on Earth (to the extent that said humans continue to exist).
The work they have done in this direction so far is not serious, so it's not taken seriously. They obviously care much more about enriching themselves than slowing or reversing current trends.
If they want to be taken seriously, maybe they should start acting like they're serious about anything besides their own wealth and power. And I do mean acting---they need to show us through their actions that they are serious.
Seriously. They can say they want to share their gains all they want, but I don't see them spending any lobbying money on things like universal income (and if Altman can afford to lobby for age verification laws, he can certainly afford to lobby for things that actually benefit society). The reality is they don't lobby for anything that would take wealth away from them, and any redistribution of wealth (such as a 75% tax rate) would by definition take wealth away from them.
You can, but then what? Do you judge what they say as if their perspective is the same as yours, and then conclude from that context that what they suggest could only come from an evil person? That seems to be what a lot of people do. What if they actually think what they are suggesting is the best thing for the world? How can you tell what is in their minds?
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
The problem is that people have a million stories to explain the observed actions, most of those stories are bullshit, and people repeating them know fuck all about the decision-space in which these actions were chosen and taken.
The idea that we cannot possibly use people's actions to judge them is ridiculous. Musk thinks that the world would be a better place if the races were separated and if all charitable giving was ended. I think that's monstrous.
The billionaires could start to earn trust by lobbying for laws and programs that help the poor and displaced. Put money into retraining programs to help people who lose their jobs. So far they seem to be doing the opposite; CEOs are publicly declaring ‘fuck you, got mine’ and leaving it at that.
Nick Hanauer has lobbied for higher minimum wages.
Michael Bloomberg has lobbied for healthcare.
Pierre Omidyar has spent about a billion on economic advancement non-profits
Gates Foundation - Bunch of stuff.
Warren Buffet - Too much to count
George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.
Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.
A large number advocate for a universal basic income.
More advocate for things that they clearly think are good things for the world, even if you, personally do not.
Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)
Sam Altman has done Worldcoin and is heavily invested in nuclear fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that, if they worked as claimed, would be beneficial.
Many billionaires spend money on non-profits to push for change, often they do not put their name on it because it makes them a target for attack, or simply that by openly advocating for something the lack of trust causes people to assume whatever they suggest has the opposite intention.
I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might cause them to reach common ground about what is the right thing.
>Why treat them as the enemy, when a dialog might cause them to reach common ground about what is the right thing.
People like Elon literally are the enemy. He used his wealth to literally change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us (his DOGE efforts literally resulted in people dying), is absurd. If a dialog with them was going to work it would have happened a long time ago, but the more we learn about these people the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.
Please consider your biases. Musk could not have “changed” the government if the DNC didn’t hand it to Trump on a platter. Republicans took over because serious people had had enough with the DNC’s full-throated embrace of two things: race-based selection (with the unpopular Harris’s undemocratic coronation as the flagship example), and the relentless focus on trans ideology (to the point anyone not endorsing the fullest embrace of that idea has been declared equivalent to the worst racist). Without that, Democrats would have remained a powerful and relevant party and Musk would have gotten nothing he wanted.
>How can you hope for anything better if you consider it an us versus them situation?
Because it IS an us vs them situation.
They're awfully good at turning it into an us vs us situation, whether it's blaming our parents (boomers), blaming immigrants, blaming muslims, or (their favorite) blaming the unstoppable forward march of technological progress (e.g. AI).
The media organizations they own are constantly telling these stories because it protects them.
>The Government could legislate that any increase in profits that are attributable to the use of AI are taxed
Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.
When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.
They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.
If I had the answer to that I would probably be a politician instead of a systems eng, but off the top of my head: build out parallel economies at the state level where people in the US actually live, ensuring QoL standards, then gradually renegotiate back up to the Federal level. It would require, united..states eventually, but the general thrust is to shed corporate capture so that people see their government actually benefiting and providing them with tangible life improvements in real time.
This is interesting to see since on another HN post everyone is bemoaning how expensive it’s getting to use frontier models because Anthropic is massively throttling Pro Max Claude plans. That’s certainly not going to become more accessible to us normal folk through taxation.
AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity - and provides it to everyone for the cost of a few thousand bucks in hardware and a few watts of electricity.
I can't think of any period in time where it was so easy to go into business yourself and to generally have access to the same "means of production" as the biggest companies have.
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or to set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
I'm well aware of this: I bought a pretty beefy (consumer grade beefy) GPU machine and run all sorts of open weight models. I do think there is potential.
But are you expecting 360m Americans to start their own businesses? That is a solution that doesn't scale. Consumer grade GPUs aren't going to scale all that much either, and the cost of the models is going up rather than down as vendors start seeking profits. We already see the memory and storage markets exploding in cost due to the rise in demand as well.
Also: a handful more already well-off people going into business for themselves is not going to move the needle on inequality. When people say "it's never been a better time to start your own business" they still mean the people who already have their needs met and have the capital to live off for a while until their business becomes viable: in other words, the people who have always started businesses, already-rich people.
It's never been a worse time for the poor or middle class to think about starting their own business. Prices on everything are rising, it's getting to be a struggle for even the middle class to continue to afford their homes. Healthcare is even more fraught than ever before, and if you're lucky enough to have a decent plan from your employer, aint no way you're going to give it up to go start a business.
> But are you expecting 360m Americans to start their own businesses?
I do not. I grew up on post-scarcity utopias like Star Trek, coupled with social capitalism, and believe that when there is a market need, people with the interest to tackle it will do so, even in the face of personal financial risk, but I absolutely don't think that it should be the default for everyone. Where there's no strong economic benefit for others to work, I would hope that we could offer everyone UBI, so that a comfortable basic level of life is available for everyone, without having to invent bullshit jobs that aren't needed.
I know I sound naive, but I truly believe that we can move into a future where human value is decoupled from their job, without going into communism.
The answer to that question was the US before the 1970s when manufacturing was still onshored. So many joe shmoes literally started companies in this era taking some garage creation and manufacturing it at scale at a local plant.
Now that all takes place in China. With layers of middlemen who collect arbitrage between you and the Chinese manufacturers they connect you to. With tariffs. Weeks of international shipping. Enough volume of orders to justify international shipping at all. Enough production capacity ordered to even be worthwhile making your thing versus larger orders from around the world all being made in China.
> AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity[.]
I would rather claim that this is a proper description of shadow libraries [1].
Yup. This is why if you claim to espouse literally any form of egalitarian political belief while being upset about (open source) generative AI, I know you're a fraud/charlatan/intellectual bankrupt/ontologically evil.
Huggingface, Swartz et al have done more social/political good for this world than billions have.
Swartz died in 2013, a decade before LLMs. It is distasteful to put words in the mouths of the dead by invoking him here.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
"Joel, you look like a smart kid. I'm going to tell you something I'm sure you'll understand. You're having fun now, right? Right, Joel? The time of your life. In a sluggish economy, never ever fuck with another man's livelihood."
People currently assume AI will be an accelerant of inequality because all currently useful models (i.e. those potentially capable of mass labor disruption) are only able to run in multibillion dollar datacenters, with all returns accruing disproportionately to the oligarchs who own said datacenters.
I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.
> Everyone would have equal access to the ultimate means of production.
You are right that AI can be a fully democratized commodity. The problem is that the current wealth inequality is not the result of AI. Musk became a trillion-seeking oligarch not because of AI; it is because the entire financial system is designed to extract wealth from everyone and concentrate it at the top. Democratic AI is not in their interest. There will be violence, but not because AI is supposedly a catalyst of inequality. It will be violence from the rich towards the poor, because democratic AI is not acceptable to them.
>It will be violence from the rich towards the poor, because democratic AI is not acceptable for them.
Unless the rich somehow manage to completely stifle the progress of consumer-level computing advancement (all chip manufacturers would just collude to quit selling to consumers?) and exert an iron-fisted control over the dissemination of software (when has this ever worked?), I'm not sure how they could control the democratization of AI.
> It will be violence from the rich towards the poor, because democratic AI is not acceptable for them.
There's been ongoing class warfare happening for centuries, but only the rich side is firing the bullets. The rest of us are just standing in the front lines getting shot. AI is just another type of gun for their army.
So far, AI is a "unique" technology in that the main use case is "work replacement." Consumer applications have only existed to "destroy human creative media with low quality slop".
The vast majority of individuals derive no value from AI, they are instead told to do their jobs faster and own the mistakes of the AI for flat/declining pay. It's a bad deal for most people.
AI is not unique in this at all. It's also the goal of almost every technological advancement. The only difference with AI is that it's replacing jobs that people thought could never get replaced.
It's always people's fault. Blaming technology is the most short-sighted response. People make it, and wittingly use it in a disagreeable way, because it earns them money.
There is something else that needs to change which everyone is reluctant to admit, or is struggling with internally.
That's ok, it's called conscious evolution. It hurts, but it will be ok someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction is one; even if the entire world seems to disagree, keep pushing for what you believe is right, and hopefully that's something which is not infringing on other people's capacity to live a happy life.
My question to you is, are you willing to give up the tools of the oppressor in that pursuit of combatting the true villain of "gleefully taking away people's livelihoods"? What I mean is, yes, you are right, technically AI itself is not the problem. But it is the tool by which the oppressors are working their oppression.
Do you make this distinction that it's not the AI that is doing this to us so that you can be more clear in where to target your ire, or are you making the distinction so you can continue to use LLMs with a clear conscience?
I feel like the entire discourse is a proxy for what should be direct discourse about inequality and the regressive (rob from the poor, give to the rich) nature of our system.
Eliminate the AI variable entirely and the problem remains, therefore AI is not the problem.
You have it backwards. People are using billionaire owned AI, billionaire lobbying efforts gaming the system, and billionaire owned media as a propaganda arm for AI as a specific example of the larger general idea.
The PC revolution in the 1990s is one of the core drivers of inequality, where the rich took almost all of the dividends from the vast productivity gains from personal computers as the prime development of Moore's law rocketed computers from 66 MHz to over 8 gigahertz.
Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse..
Maybe it’s my own lived experience coloring my perspective, but man the author feels like a centrist sitting upon an imagined moral high ground. “Violence is bad but inevitable” is the kind of milquetoast non-committal position one takes when they have nothing else to contribute to the discussion at hand.
My own take goes that one step further, as I said in a prior comment rebutting Altman’s whinging blog post:
> Your staunch refusal to heed the critiques of those you harm means that these outcomes were inevitable; not acceptable, not justifiable, but inevitable nonetheless. In a society where two full-time working adults still cannot afford a home, or children, or healthcare, or education, your insistence upon robbing them of their ability to survive at all is tantamount to a direct threat of violence against them. Your insistence that the pain is necessary, that others must clean up the messes that you and your peers are willfully creating, is the sort of behavior expected from toddlers rather than statesmen.
The problem does not lie with technological innovation itself, so much as the powerful humans behind it leveraging it for selfish ends without the consent of the governed. Violence becomes inevitable when people see no alternative, and necessary when the stakes are kill or be killed, as AI is currently steered towards. That’s not to condone the actions of the alleged perpetrators so much as it’s highlighting the litany of historical examples around such transformations and the effects violence has in forcing a peaceful compromise in most (but not all) cases. The New Deal couldn’t have happened without the decades of preceding strikes, protests, and government-sanctioned violence against workers; the violence made it impossible to ignore or delay any further, and the result was outing corporate entities who had been stockpiling chemical weapons and machine guns, so fierce was their opposition to sharing the products of labor with the workforce. AI already has the weapons, it has the surveillance apparatus, the government backing; violence is presently the sole recourse left to a growing number of people, because they know they’re an obstacle to the powers that be - and will be destroyed, lest they strike first.
That’s the real story, here, and those who haven’t lived in the gutters of society cannot possibly understand the desperation of those victimized by it in the name of greed.
I like Tristan Harris' take on the situation, which is both more nuanced and more actionable. The idea being that the system and incentives are set up to select amoral technologists who will make money for shareholders, so inevitably the ones that come into power will be the ones who don't see a problem with replacing all of human labor (because that's the only outcome that can justify the investment made). Reading Cory Doctorow's article from yesterday (https://pluralistic.net/2026/04/11/obvious-terrible-ideas/) was a poignant example of how the incentives are stacked against anyone with a conscience. The only solution is political action, because the interests of the 99.9% are aligned here. And I say this as someone who loves technology and sees lots of value in AI, but it needs governance, and while in the past I was wary of government regulation in technology, in this case it's way broader and more existential to our civilization than one category of labor being disrupted.
That’s an excellent take that’s framed far better than my wordsmithing skills permit at present. Systemically, the incentives are there to maximize long-term harms for short-term gains, and the personalities who thrive in said systems are the ones who currently run the very institutions that could change them. Absent a willful surrender of their agency to change the system in a way that would harm them in a limited financial way while improving the lives of everyone (themselves included), violence is, historically, the only way such toxic incentive schemes have been reformed.
I question how universal that is. There seems to be a meaningful difference between Altman and Amodei, for one. The WhatsApp founder was a decent guy as well, and I believe him when he claims to genuinely regret selling out. I'm sure there are more examples.
I think framing it as "the system is set up this way" reads as too passive. It reads as if it excuses the likes of Sam Altman, Mark Zuckerberg, Jeff Bezos, Peter Thiel, and Larry Ellison, among others, from being despicable sociopaths whose carnage, inflicted upon society for purely selfish reasons, justifiably needs to be treated as treason against society, with the obvious rightful consequence.
That's fair. Yes, they are all individuals with their own unique perspectives and approaches, and we should definitely hold them accountable for their impacts. I'm not saying the systemic incentives absolve them of responsibility; I'm just saying that we cannot depend on the CEOs of corporations to do the right thing. This is the role of government, but even more so, elected representatives are people too, so actually it depends on a more fundamental movement of the people en masse to make it known to our representatives that this is way bigger than partisan politics.
Well said. It’s striking to me how many adults can’t conceive of “violence” as an abstraction that results in certain effects and fall back on “violence is dealing direct physical injury to a person’s body or building.”
Weirdly enough, I find that victims of violence who weren’t engaged in a greater act of violence (i.e., the domestic abuse victim versus a soldier in a conflict) are often the staunchest advocates for unwarranted harm towards others to preserve their personal sense of safety. They will carefully carve out a definition of violence that speaks to the specific harm they suffered and requires explicit physical action, and then use that qualifier to reject any other notions of violence.
A recent example is the domestic abuse victim in my complex who has set up private surveillance cameras in the indoor common areas that are heavily trafficked by other neighbors, none of whom have given their consent. She does not consider warrantless surveillance of others (or calling the police on those of us who do not wish to be surveilled in a secure area of the building by her personal cloud camera) to be a violent act, nor does she consider threats of calling the police on those who shield themselves from her camera’s view to be an act of violence.
Violence is not limited to physical actions that induce physical harm; it is any action intentionally designed to reduce the safety or security of others - physical, mental, fiscal, political, etc.
History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Colonial empires proved it only a few centuries back. The invading alien powers are fuelled by the inviting natives.
AI (and computing technology in general) is an alien as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances without time lapse, can do huge work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or thinking work, is not a living thing but can think... Just the perfect alien-creature qualities.
Why are they allowed to invade Earth? The business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed by giving toys (a business edge) in return for the keys to the dominance of the human race.
Ugh, this entire way of treating AI like a magical alien invasion is the problem. It's just a statistical model, text-in, text-out (and it's humans that feed the input and act on the output). It's not some alien invasion that can't be stopped; it's just another technology that we as humans need to figure out how we want to use. Seriously, people need to stop trying to anthropomorphize AI, because doing so is one of the biggest hurdles to practical/common-sense AI adoption IMO.
It is definitely not "just" a statistical model. It is inextricably linked to the datasets it is trained on. Datasets that these companies possess, but that ordinary people do not. That is one half of where they get their power (the training techniques being the other, but those tend to bubble out to the general public, or at least the interested public).
That is a choice a human made. Imagine if someone proposed sending the outputs of a random number generator to a space laser and had it fire at will, would we blame the number generator for the destruction it causes? You may say that LLMs are not random number generators, and I would somewhat agree, but at least in their current state and level of understanding we have about how they derive their output they might as well be.
So, imagine that some humans make this choice and then AI autonomously takes over and humans can't stop it anymore. Is that enough to treat AI in such a situation as a magical alien something that can threaten your or my survival?
One thing that the whole AI debate has shown to me is how many people completely lack any sort of imagination.
The initial Spanish conquest of the Inca empire by just 168 Spaniards was not a question of disease so much as a war of succession the Incas fought amongst themselves, which Pizarro knew to exploit. Throw in horses, steel, and gunpowder and you have a one-sided affair.
Actually this is another good counterexample! As I recall, Incas lost battles against the Spaniards where they had something like 100x the numbers. It's true that they were initially divided, but they quickly united against the Spanish--and it didn't really help. The technological advantage was insurmountable.
How could it have been? It wasn’t like they had machine guns. In the best case I believe it takes something like a full minute to reload a musket. A Zerg rush would be sufficient tactics. A 100-yard dash means your horde of unarmed natives is through the musket range in maybe 10-15 seconds and pulling limbs off the Spaniards already.
Why this wasn’t done is, I think, the big mystery, and it lends credence to the idea of the Spaniards having significant force numbers through allies.
This is not true of everywhere that was colonized. See Africa, or India. It would not be possible, even with a very great tech advantage, to sustain military campaigns so far from Europe without a safe port to base supplies from, not to mention the manpower, etc. These were very much made possible by what was essentially a standard playbook of allying with some natives against others, and using trade imbalances, violence, strong-arming and other things to turn those "allies" into protectorates, and eventually colonies.
Right. I am not saying diseases were a factor in every conquest. Just refuting parent saying that conquest is "only possible" through infighting. It's not - overwhelming technological advantage or disease are also sufficient even against a united culture.
Yeah. Basically conquest is possible when the victim is weakened. There are many ways to become weakened. Infighting and disease are common causes of weakening.
Wait, you think AI won’t eventually have full control over a bio lab, where it can manipulate an unsuspecting tech to produce and release a bioweapon to accomplish that explicit goal?
Because I think that seems virtually inevitable at this point.
Humans will give a slop machine control of a lab full of CRISPR machines because they think it might make them a dollar? It wouldn’t take Supreme Super Intelligence for that to go badly.
They don’t have to hand over control to lose control to AI. People are easily manipulated, and AI has proven itself able to manipulate people. How long until a tech is tricked or coerced into doing something dumb on a planet scale, based on intentional misinformation given by its apparently benevolent AI assistant?
>History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives.
Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it:
Whatever happens, we have got
The Maxim gun, and they have not.
The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de-facto king of the world. Everybody else is livestock.
I used to think this, but the AI labs sure seem neck-and-neck in the model race. Doesn't appear that anyone is developing an enormous lead. So I've become skeptical of the runaway king-of-the-world-maker model scenario.
The open models seeming to be ~6 months behind is very encouraging, too.
AI progress can potentially be extremely non-linear because of feedback effects. The first to build an AI smart enough to accelerate building even smarter AIs wins (or loses along with everybody else if it's more successful than they expected).
People have said this, but so far if anything the opposite has been empirically true. OpenAI had a huge lead and it just didn't matter, Anthropic and Google both caught them and now they're neck and neck. It seems like compute overhang forecloses the possibility of runaway progress which eliminates all your competitors.
Any feedback process has a hard threshold for instability. The PA system doesn't howl until the microphone is close enough to the loudspeaker. The atomic bomb doesn't explode until the fissile material reaches critical mass. If you don't know where the threshold is you can't extrapolate.
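To put a rough number on the threshold idea - a toy sketch only, not a model of AI progress, and the gain values are arbitrary: treat each pass through the loop as multiplying the signal by a gain g. The qualitative behaviour flips at g = 1, which is exactly why you can't extrapolate without knowing where the threshold sits.

    // Toy illustration of a feedback threshold (illustrative numbers only).
    // Each pass multiplies the signal by the loop gain g: below g = 1 the signal
    // dies out no matter how long you run it; above g = 1 it blows up no matter
    // how small the initial signal was.
    function afterPasses(g: number, passes: number, initial = 1e-6): number {
      let x = initial
      for (let i = 0; i < passes; i++) x *= g
      return x
    }
    afterPasses(0.95, 1000) // effectively zero - the PA system stays quiet
    afterPasses(1.05, 1000) // astronomically large - the PA system howls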
Compute is a limiting factor now, but there have already been huge improvements in compute efficiency, e.g. mixture of experts. It seems extraordinarily unlikely that there are no more to be found. And compute capacity continues to increase too.
>The invading alien powers are fuelled by the inviting natives.
And the massive amounts of people (software engineers, lawyers, doctors, etc) currently being paid as contractors to help train the next AI models. They're essentially the inviting natives who are being paid in trifles to tell them the secret ways of the natives farther inland. Sucking out all of the tribal knowledge of the industry like a vacuum.
This would imply that evolution, which is also an arms race that disrupts and obsoletes the status quo, is due to some “weakness”.
AI doesn’t actually come from the outside.
The fact that its economics have strong winner-take-a-lot aspects doesn’t mean you can eliminate the current winners and end up anywhere different, because it’s actually a natural, decentralized progression of improving efficiency.
So that framing makes no sense.
However, the thesis for the potential for violence is sound. I don’t see a way out of that, given unending disruption, with no coordinated responsible response.
I do not think this essay is hype.
This moment requires great leadership and competence, but that is not what is getting elected.
The last two decades' patience with massive businesses scaling up profitable conflicts of interest, and centralizing gatekeeper and dependency powers that offer no recourse to any individuals they mistreat, strongly suggests we are incapable of dealing with AI fallout - which will only accelerate and add to those trends.
It reads like someone discovered analogies and decided they’re a substitute for thinking.
The entire argument lives and dies on one move: calling AI an “alien.” And it’s not even consistent. It starts with “alien” as in foreign invader, then quietly upgrades it to “space alien,” and from that point on everything just inherits whatever sci fi trait sounds dramatic. That’s not reasoning, that’s a word doing a costume change and dragging the argument along with it.
And honestly, the quality of comments on HN feels like it’s been tracking the broader decline in cognitive performance. The long running Flynn Effect has stalled or reversed in parts of the US. Some datasets show small but real drops in IQ related measures over the past decade. You read threads like this and it’s hard not to feel like you’re watching that play out in real time.
Highly recommend people learn the history of the Industrial Revolution. I recently discovered the Industrial Revolutions Podcast[1] and have been enjoying it. What's happening today isn't unprecedented. The pace of change that's happening IS similar to periods of the industrial revolution.
For example, the spinning jenny, overnight, basically put an entire craft industry of hand spinning into question. Probably more dramatically than anything Claude Code ever did.
It took A LOT, including several world wars, to reach the brief period of normalcy post-WW2 - probably the exception, not the rule.
“Nobody should learn about the collapse of societies, because my society is different”
“Nobody should learn about the history of tariffs, because the US in 2026 is unique”
It’s not either / or but shades of applicability to today. Things don’t just 100% apply or not apply.
We have a rich history of what happens when we automate labor. Even work people considered “knowledge work”. Each time was different. That doesn't mean there aren't shared patterns to learn.
This is the key point. It threatens nearly everything in the limit, not one particular industry. There will be no "leveling up" into higher-order jobs, because the machines will be better at those too.
They thought that too in the industrial revolution. You can look back and see the jobs that came out of it. But at the time, it wasn't obvious to the people affected that there would be jobs again.
We may have hindsight bias in evaluating something that happened, but to the people that it happened to it was terrifying.
MIT's motto is mens et manus: mind and hand. These are, basically, the two primary attributes of human labor. They're the reason almost anyone gets hired to do anything. Our brains and our opposable thumbs are what set us ahead of the animal kingdom.
The industrial revolution first attempted to replace our hands. But the labor that was displaced had places to go: into smaller-scale manual work, where mass-production machinery was too expensive, and into knowledge work.
Now the AI is coming for knowledge work, and robots are getting better at small-scale work. We're not at that point yet, but looking down the road I'm not sure there will really be anything competitive left flesh-and-blood humans can offer to an employer.
The only exceptions I can think of are, maybe, athletics, live music performances, and escort services. But with only a few wealthy people as customers, I don't think there will be many job opportunities even in those fields. And I'm not sure that robots won't come for those too.
Nobody had any idea what was coming with the industrial revolution. There wasn't obviously other work for people, and for long periods of time nobody had an answer to that question for a large percentage of the population.
In hindsight, we know the answers NOW, but back then they did not know what was going to happen. We also don't know what's going to happen; it could go as you hypothesize, or the Jevons paradox people might be right and there's way more work to do.
The uncertainty is the historical lesson, not that "it'll all work out"
I guess the people in Wall-E didn't really seem unhappy so perhaps you're right. My gut instinct though, is that there is a qualitative difference in the level of abundance and concentration of wealth, power, and influence we have today that needs to be taken seriously on its merits and not hand-waved away with tenuous historical analogies.
Yes, two hundred years ago, many people thought reading was a dangerous distraction for young people, just as film, radio, TV and the internet became later. But there is a qualitative difference to having social media in your pocket with vibrating notifications. Pretending it's just more of the same honestly feels like slightly willful blindness at this point.
Nah, it's obliterating the distinction -- made by middle class folks and only temporarily true -- between physical labour and intellectual labour.
You, as a blue-collar machine operator shoving punch cards in and getting answers out, are precisely what your boss always saw you as, or wanted you to be.
Our necessity as pseudo-craftsmen holding an intellectual high ground and wizardly/magical skills was always resented by investors, owners, and sometimes customers.
Blacksmithing and leather tanning and shoe making and seamstressing and furniture making was human knowledge work, too.
The Alvin Toffler stuff was always bullshit, but it's even more bullshit now.
Thomas Piketty does indeed argue in Capital in the Twenty-First Century that the post-World War 2 period is an exception in terms of inequality being lower; historically it was not the norm, and we are now reverting back to the mean of greater inequality. Yet people bemoan the idea of not being able to live off a single job, when in reality that was never guaranteed.
Much as we'd like that to be true ideally, does it happen (in terms of inequality reducing)? I see no evidence of that; it ebbs and flows across various time periods and civilizations. One can try to resist that reversion to the mean, but historically one would be proven wrong.
For a start by not tearing down the systems that kept inequality in check in the past. Like union membership or banking regulation etc. just to name some examples.
I've been researching Luddite movements around the world. Agreed that the topic is timely.
A closer comparison to Sam Altman might be Edmund Cartwright (inventor of the power loom that automated weaving). The Horsfall and Altman situations differ in that Horsfall was a factory owner but didn't create or organize the teams that built the stocking frames. There was also an attempt on Cartwright's life as he was out riding. But like Altman and unlike Horsfall, he wasn't killed.
A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.
The power of the tool itself will be overshadowed by the motivations of its real owner. I can be both impressed by its ability to empower me, and be scared of the fact that the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support.
When most engineers and Marvel fans watched Tony Stark in Avengers collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated on Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
> The power of the tool itself will be overshadowed by the motivations of its real owner.
Not only that, but by how blatantly and openly these owners are discussing the tool's power. They are publicly crowing about their product's ability to replace workers. It's the first line of their sales pitch. And also, their customers (business CEOs) are publicly crowing about how awesome it is that they can reduce their headcount! Both the AI producers and their customers are absolutely bragging about worker displacement, and not a single guillotine has been constructed in the streets yet.
> the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support
Personally, the tools don't need to change hands at all. They are already in the hands of people who are deploying them at a scale to serve goals I cannot and do not support
The people running AI companies right now are some of the most evil motherfuckers on the planet
It'd be nice if they didn't use those terms at all, because I don't think they're useful, relevant, or real.
If we thought of all of this as 'stochastic data systems' then our heads would be in the right place as we thought about it: just 'powerful software' that can be used for good or bad purposes, where the negative externalities derive from our use of it, not from some inherent property.
On the other hand, "magical new systems that provide almost unlimited capacity for intelligent work" is probably a more functional mental model. Genie can give you 1000 wishes till you reach your session limit.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam, Dune
It’s the inevitable result of valuations based on hype and future potential, not business fundamentals. It incentivizes companies to be as hyperbolic as possible with their pitches and marketing.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
Magic or no, ultimately "AI" leads to labour displacement and it's just a continuation of the much broader trend of automation driven by computers.
Labour displacement leads to an erosion of standards of living and, in a world that ties purpose to work, is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding that "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty-level UBI + living in a pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
>Labour displacement leads to an erosion of standards of living
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
The agricultural and industrial revolutions "weren't labor displacement"; they were technological and social changes that happened unevenly and gradually in time and space and which resulted in labor displacement - but they were not the only cause, and they didn't happen BECAUSE of labor displacement. I would argue the subsequent labor displacement caused a minor part of the social gains to be later distributed and realized through class struggle, but that's beside the point.

Most wars cause mass labor displacement and military technological advancements that later translate into society as a whole. Are you prepared to argue for wars? If you are American, you are experiencing firsthand the effects of what was once a major part of your industrial labor being absorbed by China. It has led to massive inequality and erosion of standards of living in the US - not so much for the Chinese working class, whose standards of living have steadily improved. Are you going to argue for that?

I think if we only look at things from a limited perspective - in this instance a technocratic and teleological view of history, as in: history has a designed finality, and this finality will be achieved through unrestrained development of the forces of production - we are bound to quietly take part in the destruction of society and nature, now viewed as externalities, and to accept the worst of atrocities in the name of "advancement", while most of any gains are captured in the short term by a minority.
AI is different. It promises to be able to do everything humans can, but better and more cheaply. When AIs can do every human job cheaper than the subsistence cost of employing a human, humans will be economically obsolete and worthless.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
Before the industrial revolution, approximately 90% of people worked in agriculture. In fully industrialized countries, that figure is now <2%. That decrease constituted a nearly full replacement of everything humans were doing, better and more cheaply. While this time might be different, I don't think this is a given.
Maybe it’s not a given, but it is part of the sales pitch for CEOs. A few others have announced layoffs due to AI being better and more efficient than humans.
How much truth there is to it we don’t know for sure. But it’s not something to be ignored.
CEOs have been saying the exact same thing for the entire history of automation. Take computing, for example, an industry that's always been unusually amenable to automation:
— in the 1960/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.
— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory, and rich standard libraries mean they don't have to continuously reimplement common data structures from scratch."
— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."
While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
While this time with AI may truly be different, I'm not holding my breath.
> humans will be economically obsolete and worthless
Only if we are talking about a socialist system (and they are making pretty small progress in the field of AI). A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.
A people's well-being is literally the goods and services created by that people. How can it decrease if the people's ability to produce those goods and services is not hindered in any way?
So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits. The main danger is a descent into socialism, with all these basic incomes, taxation out of production, and other practices that would lead to people being declared economically obsolete and mass executed to optimize their carbon footprint or something.
> A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.
Yes they can. Your ability to produce goods and services depends on the infrastructure around you. When that's all run by AIs for AIs, humans won't be able to compete.
See that land over there producing food you need to eat? It turns out it's more economically efficient to pave it over with data centers etc.
Under a US-style capitalist system the rich (i.e. the AIs and AI-run businesses) control politics, the courts, etc, so the decisions the system makes will favour AIs over humans.
> So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits
...to the AI-run companies!
> The main danger is a descent into socialism, with all these basic incomes
Without UBI most people (or maybe everyone) would starve.
Yeah, and who is creating that infrastructure? Jesus? It is the same kind of goods and services.
> When that's all run by AIs for AIs, humans won't be able to compete.
So what? The ability to produce goods and services (and therefore general well-being) will not decrease because of that.
> It turns out it's more economically efficient to pave it over with data centers etc
By the way, a good argument against your position. Agricultural land is very cheap, but the vast majority of people who believe AI will put people out of work and worsen overall well-being are for some reason reluctant to buy this asset, which would see a catastrophic increase in value under such a scenario. So these people are either incapable of analyzing the economic processes, and their predictions are worthless, or they don’t really believe in such a scenario.
> will favour AIs over humans
Let me repeat: it does not reduce the ability to create goods and services. Under capitalism, this is the only characteristic that determines people's well-being.
> ...to the AI-run companies!
I think this is a fairly unlikely scenario. But even in this very unlikely case, people's well-being will not be reduced. Simply because of the mechanisms of creating well-being.
> Without UBI most people (or maybe everyone) would starve.
Economic theory (and 20th-century economic practice) demonstrates the exact opposite. In every country that attempted to effectively implement UBI, it led to a sharp decline in production and mass starvation. Literally every single time.
> in a world that ties purpose to work is an existential threat on a very practical level.
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
> The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist.
Think of the alternative, though: If we planned for a soft landing and implemented safety nets and started transitioning ourselves to a society where people didn't have to work to survive, then a few trillion dollar companies would make slightly less profit every year. We simply cannot allow that. Won't someone think of those trillion dollar companies for a minute?
That's a truism. But it ignores The Iron Law of Oligarchy, Pareto Principle, and dozens more that remind us that power tends towards centralization. It's currently fashionable to call out the billionaires, but if you removed them, they'd just be replaced by corrupt government officials, or something else.
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
More importantly we shouldn't deny the rest of humanity benefits on the basis that the majority of the benefit accrues to the powerful. We should strive to change the distribution pattern, not remove the benefit.
Right, giving up is actually how these things end up becoming principles/laws. Power centralizes because people become complacent and ignorant on matters of power, so there ends up being a power vacuum, to which others seize the opportunity. But absolute power centralization almost never occurs, due to the delegation that is necessary to wield that power in practice, and so these two forces end up balancing each other. As such, the equilibrium point (or point of maximum entropy) ends up being some type of oligarchy. But anyone can take steps to address this and adjust this equilibrium point, but it takes active work.
The problem with billionaires is that they are able to hoard so much money by exploiting others. We would be much better off if billionaires weren't given so much advantage by Capitalism as those resources would be much more useful if distributed.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
Unfortunately, this is the only way to get enough venture capital to support the compute needs for this kind of technology. Who is going to spend hundreds of billions on a vague idea without regular claims that this will upend the existing economy in six to twelve months and whoever owns it will become unfathomably rich? And despite all the actual developments we have seen going against that idea, investors keep falling for it. This will continue until it crashes, one way or another. The question is how long it can build up and how deep the fall will be. LLMs will certainly change the economy in the end, but so did mortgage backed securities.
It's a sad indictment of our society that there is always a shortage of money for medical care, infrastructure, housing, food stamps and space exploration but always a surplus of cash for war and tools that purport to replace the workforce.
There will always be a shortage of money for medical care. The dirty secret of social medicine is that a small percentage of the population are essentially unhappy utility monsters [1] who gain little or no benefit no matter how many resources are poured into treating them.
> It's a sad indictment of our society that there is always a shortage of money for medical care...
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
There isn’t really a shortage of money for those things, just rampant levels of fraud, corruption, and incompetence in the government to make those things artificially expensive. California spends so much money on high speed rail and gets 0 feet of track because they’re not paying for track; the whole thing is a scam where the politicians give taxpayer money to their political supporters in exchange for political support. Defense isn’t immune to this either; Boeing, which builds a shitty heavy lift rocket out of Space Shuttle spare parts and delivers it late and over budget, pulls the exact same bullshit with their defense contracts, and there’s always some shitty Senator siding with them against the American people whenever anyone gets upset.
The current British government should be a shining beacon for you! Its welfare bill actually outstrips national income by far. Britain's pathetic defense capabilities cannot even see off Russian warships that intimidate by deliberately hanging around British waters assessing our vital undersea cabling. The UK government has now asked France if it can help deter these ships. Tangentially - I should add that even with their massive expenditure on the National Health Service (NHS) it's not enough, and too many people feel that they have to go abroad to get life-saving operations and procedures. If they can afford it, of course. But sure, that is another matter. As far as I can tell, there seems to be pretty much an apolitical consensus on both areas.
So did compassion, probably in a greater amount. And yet the greater amount of resources goes into war at the expense of compassion.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.
Yes, the social darwinist approach inevitably leads to eugenical thinking and the human meat grinder that follows. We, as beings with the capacity to understand harmful vs. non-harmful behaviour, know that harmful behaviour has a collective consequence: human suffering and the suppression of freedom.
I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked modern, "late-stage" capitalism model, championed by the U.S. and since exported everywhere else, where it has taken good root - even in Europe, which as of yet has a few more checks and balances (which unsurprisingly draws a lot of ire from its acolytes and priests across the Atlantic).
The Soviet Union lost due to an inferior societal model, but this one, too, has strayed too far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more just to end up alongside the rest. I could go on about the irony of wanting to escape the pit while not wanting to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometrical, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy and -- in a twist of irony -- inhumane treatment of opponents, the October revolutionaries in Russia, yes the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them to the same extent our gadgets allow the "governments" of today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put into action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding that people become civilised and refrain from throwing eggs (or Molotovs) at celebrities who are about to swing _entire governments_ is not seeing the forest for the trees.
There's also no precedent, in a way -- the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to have the effect required to enact the change we want. In the worst case, of course.
I think a lot of HN readers, and a lot of first-world/law-abiding dwellers in this and recent threads, forget to think.
Violence is not a panacea, but often, the outlet.
Yes, we all (the majority of sane people) know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it does not solve X" will not stop it from becoming an outlet for frustrated people.
Actually, violence is the ultimate power. It is where true power comes from — you can gain true power by hurting other people and/or benefiting other people, and it is always the power to hurt people that is the greater of the two.
A well-run government wraps violence behind a curtain and jealously guards it. For example, most modern governments look down on and punish private vendettas, because the state is the only one that can hurt people legally. But if the people believe that the government is biased or doesn’t care about them, then they will resort to violence, the ultimate power.
Give enough people enough guns, and school shootings are inevitable.
Allow a handful of people to grab the economy and all the means of production, and violence will be the result.
At this point in time it is simply cause and effect, the surprising thing to me is how long it is holding together. But at the rate the economy is being wrecked I fail to see how it will do so for much longer.
Effectively the French elites started the French revolution by being a little bit more greedy than the population would have tolerated. That set off an avalanche of what were effectively a series of mini revolutions ultimately resulting in modern France, which is in many ways unlike any other country in the world. The United States had its war of independence (aided by France, by the way), and then its civil war. But it never had a class war - yet - and this article presages that class war.
It could well be that the small number of rich people who are currently effectively a government outside of the government genuinely believe that their wealth and power insulate them from the consequences of pushing their greed and wealth to ridiculous levels. But I suspect the author is right in that this is approaching some kind of threshold, and I have no way of seeing across the divide; I'm hoping for another France rather than another Somalia.
This is why a healthy democracy is important. It helps act as a pressure release for problems that historically resulted in violence. Democracy in the US in particular is in a major backslide, and it's not alarmist to predict that violence will increase in the coming years.
> Violence is not a panacea, but often, the outlet.
This couldn't be further from the truth.
History demonstrates categorically that violence is the last and most reliable form of recourse available to the disempowered, once society has trended too far towards either an excess of freedom or an excess of equality. And, in fact, our position in that balance between freedom and equality is perpetually oscillating, tending to finally reverse direction only in response to violent revolt.
This cycle has repeated over and over, essentially since the dawn of civilization. This was among the most important insights of 'The Lessons of History' by Will and Ariel Durant. And it's based on two very simple insights about human nature: (1) those in power rarely give it up willingly (they often do the opposite) and (2) fear, on average, is and always will be a far stronger motivator than appeals to a person's conscience.
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
Better to kill the same civilian population they did, as perverse punishment, then? We have to kill them, or else Iran will kill them? The logic of this war doesn't add up.
...according to two anonymous government officials.
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
There's plenty of evidence that it's tens of thousands, but it's absurd to even argue over those numbers when a government massacring any number of its own citizens is morally reprehensible (whether it's 5k or 50k). Iran has a long history of executing its own citizens en masse.
Iran has admitted outright to 6k deaths, by the way.
I was thinking about this: if the deaths were actually at the scale of tens of thousands, would that not be visible from space?
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies, all should have some visible trace that the US gov could point to as proof.
Like I did see a handful of videos with body bags lining the street, but it couldn't have been more than 1-2 dozen. I've seen videos and satellite images of actual mass graves in gaza, fields full of bodies covered in sheets. Endless videos of toddlers with amputated limbs.
We have many videos of protests in Iran even though they shut down internet, but somehow we have no videos of mass killings or even small scale murders.
I have said repeatedly that when AI eliminates the need for human creativity and work, the only thing left as the natural domain of humans will be bloodshed.
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...
I think you underestimate just how much we value human achievement.
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
I respectfully disagree with this statement, in the sense that if the whole world ends up becoming like a chess tournament, it would become insanely harder for us to live our lives peacefully. The life of a chess player is filled with stress.
(https://news.ycombinator.com/item?id=47587863) A comment I had written some time ago. Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
The chess industry continues to hurl allegations at each other, and we lost a star (rest in peace, Daniel Naroditsky) because of it. The current world champion himself is struggling under all the pressure put on a 19-year-old.
We enjoy playing against each other, but man, it is competitive if you wish to feed a family.
Most of us play chess out of leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One can say something similar to UBI might be needed, and then we could all play chess at leisure, but I don't think that is what most people propose when they mention the example of chess.
> Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly.
Big sports events are the "circenses" part of "panem et circenses" [1]. Fun fact concerning this: the German word for "entertainment" is "Unterhaltung"; thus it can be argued that the purpose of entertainment/Unterhaltung is "unten halten" (to keep at the bottom), i.e. to keep the mass of the populace at the bottom, or in other words: to prevent the mass of the populace from coming up.
> Would anyone watch these robots in competition?
I have seen robot fight competitions both live and in videos, and I have to admit that these are not boring to watch.
So yes, with a proper marketing I can easily imagine that lots of people would love to see broadcasts of some robot competitions.
I'm not really into either F1 or Nascar, but my impression from the outside is that those sports are still primarily about the drivers
F1 is somewhat about which company can build a better car. But any real improvements seem to invariably lead to a rule change that bans that improvement in future seasons. So you are back to drivers being the most visible differentiator
I'd say JS Bach was one of the fruits of our labor, so were Newton, Einstein and van Gogh.
Olympic athletes are a combination of luck in the genetics department and a lot of effort, but ultimately that does not seem to be enough to sustain the athletes themselves.
> when AI eliminates the need for human creativity
We haven't needed the overwhelming majority of human creativity. We still paint and play guitar even though it has no economic value. I think we'll continue to do these things regardless of AI.
Sure, there are still people with newspaper subscriptions, or DSLR cameras. But it's become a niche market. Those things have been replaced by your phone and a "free" service.
The same thing will happen for all the other markets that AI will gradually eat. Sure, you can find a human who can do better. But that costs $90/hour and requires finding someone, negotiating a contract, etc. When people can do something good enough in 30 seconds with something they already have access to, and move on with their life, then that's what they'll do.
So just raising the floor will have a big effect on society.
Nothing will ever eliminate the need for those things; people work today for MONEY. If technology eliminates scarcity that's a good thing - it's the hoarding of wealth that causes bloodshed.
At least for now, AI sucks at creativity. There is an initial "wow" effect when you can generate an image of an astronaut riding a unicorn on the moon with a simple prompt, but as you try to play a bit more with it, you notice that unless you inject some of your own creativity, you won't get very far, no matter the medium.
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at it.
There is a reason we talk about "AI slop", you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, these kinds of things. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we have spent more on saving people than on killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
> I have said repeatedly that when AI eliminates the need for human creativity and work
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
You really need to look again. If you're still manually writing code you have your head in the sand.
AI can produce better code than most devs produce. This is true for easy stuff like crud apps and even more true for harder problems that require knowledge of external domains.
I'm not sure about other devs, or even their number, but AI can most definitely NOT produce better code than I can.
I use it after I have done the hard architectural work: defining complex types and interfaces, figuring out code organization, solving thorny issues. When these are done, it's time to hand over to the agent to apply stuff everywhere following my patterns. And even there, SOTA models like Opus make silly mistakes; you need to watch them carefully. Sometimes it loses track of the big picture.
I also use them to check my code and to write bash scripts. They are useful for all these.
What you're describing is using it to do something you already can do at an expert level, where you already know exactly what you want the result to look like and won't accept anything that deviates from what's already in your head. So, like a code autocomplete. You don't really want the "intelligence" part; you want a mule.
That's fine, and useful, but you're really putting a ceiling on its potential. Try using it for something that you aren't already an expert in. That's where most devs live.
Even expert coder antirez says "writing the code yourself is no longer sensible".
AFAIU antirez is mostly writing in C, a verbose language where "create a hashtable of x->y" turns into a wall of boilerplate. In high-level languages the length difference between a precise specification and the actual code is much smaller.
He also mentions using it for Python which is minimal boilerplate.
And he didn't limit his take to just C code. He said: state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be.
But if using them as mules still produces silly mistakes, how will I have the confidence to defer to their intelligence for much more complex stuff?
These things bullshit their way about all the time. I've lost track of how many times they seem to produce something great, only for me, upon deeper inspection, to see what a subtle mess they have made. And when the work is a bit complex, I cannot verify it on sight; I'd have to take time to do it.
Also, they absolutely cannot even produce some levels of code. Do you think I can just give them a prompt to produce a Haskell-like language, allow them to crank for some hours, and have a language ready-made?
Want an example? Here is something Sonnet gave me just today:
I get this as the type of xx: Promise<Result<Pick<Cabinet, "name">[]>>
Which is obviously wrong. I should be getting the full type, i.e., all columns picked. The problem is that the Column generic parameter is not being properly inferred, which is (probably) due to the sort by name: the sort column is constrained to be one of the queried field names, so when fields is not provided, TypeScript infers the fields parameter from the sort column name alone.
Neither ChatGPT nor Claude Opus has been able to solve this after an hour, suggesting all kinds of things that don't work. But I have solved it myself, with:
export type QueryArgs<Rec extends StdRecord = StdRecord, Fld extends StrKeyOf<Rec> = StrKeyOf<Rec>, FltrOp extends FilterOpsAll = FilterOpsAll, Srt extends Fld = Fld> = {
  /** Fields to include in results (defaults to all) */
  fields?: Fld[],
  /** Filters to apply */
  filter?: RecordFilter<Rec, FltrOp>,
  /** Sorting to apply */
  sort?: {
    field: Srt // StrKeyOf<Rec>
    order: SortOrder
  },
  /** Pagination to apply */
  page?: {
    maxCount?: number | undefined
    startFrom?: { sortFieldKey: any, idKey: ID } | undefined
  }
}
And:
queryX: <Ent extends EntityNamePlural, Col extends StrKeyOf<Dto<Ent>>, Srt extends Col = Col>(
  args: {
    entity: Ent,
    query: QueryArgs<Dto<Ent>, Col, fOperators, Srt>,
    auditInfo?: AuditSpec
  }
) => Promise<Result<Pick<Dto<Ent>, Col | Srt>[]>>
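For readers who want to see the inference pitfall in isolation, here is a minimal, self-contained sketch (hypothetical Row type and function names, not the actual codebase quoted above) of why a single-generic version picks only the sort column, and why splitting out a defaulted sort generic fixes it:

type Row = { id: number; name: string; size: number };

// Before: one generic F drives both `fields` and `sort.field`. With `fields` omitted,
// the only inference site for F is the sort column, so F is inferred as "name".
declare function queryBefore<F extends keyof Row>(args: {
  fields?: F[],
  sort?: { field: F }
}): Pick<Row, F>[];

const narrow = queryBefore({ sort: { field: "name" } });
// narrow: Pick<Row, "name">[]  - only `name` ends up in the result type

// After: a separate sort generic (S extends F = F) carries the sort column,
// so F falls back to its default (all keys) when `fields` is omitted.
declare function queryAfter<F extends keyof Row = keyof Row, S extends F = F>(args: {
  fields?: F[],
  sort?: { field: S }
}): Pick<Row, F | S>[];

const full = queryAfter({ sort: { field: "name" } });
// full: Pick<Row, "id" | "name" | "size">[]  - all columns picked, as intended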
You’re equating two things that aren’t the same. I’m not still manually writing code, but it’s not at all because Claude can produce better code than me. It’s worse at CRUD apps and a lot worse at domain specific bits. But it’s more parallelizable, so if I drive it well I can focus my skill on the small subset of problems that actually require it and achieve increased throughput.
I partially agree. I can see the before-and-after difference in colleagues' code. It's night and day.
They're doing things now that they either flat out could not do before, or that would have been a giant mess if they had (I realize they still can't really do it now; AI is doing it for them).
Listen, I know this is a crazy thought around here, but what if creativity was "worth it" just for its own sake? Do you stop being creative when it's not needed?
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
People need to be physically sustained. Currently, this means working a job for money to buy (food/housing/medical).
People also need their lives to have value. We are social animals. As a generalization, there is a strong desire to be (viewed as/able to view themselves as) a contributor to the community.
These don’t have to be linked: we have (significantly!) stay-at-home-parents and philanthropists and retired community workers. But in our current values system, it is often linked - having a job in the household is viewed as a moral good. It might be hated, but it’s at least “contributing” something.
If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
This. Give me some french fries from time to time, a house, and the basic food necessities of human living, and I am happy to be creative.
But what I worry about sometimes is that when you snatch that away, it just leads to stress over basic existence.
> If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
Please look around and just try to remember how many things have happened in the past year or two. We are already within a turbulent society, but yes, I also feel like this isn't the end; the cat is sort of out of the bag, and the world has to prepare itself for even more turbulence and radical change.
I don't see why you need to armchair philosophize about what people are or what they need "in general." How could we know such things? What we know is we find ourselves in certain historical circumstances, and we navigate. Right now we are, with exceptions like you mention, free only to be a worker or to be, essentially, a homeless criminal.
This whole prescriptive thing this response and others have, where it's like "ah surely it is up to us to find some meaning for the masses of plebs in our brave new world", is, IMO, presumptuous at best.
Like literally just give people an actual chance to find their own meaning, and I promise you they will find it. If it seems hard to you or "full of turmoil", that suggests a poverty of inspiration on your end, not everyone else's. Meaning is not intrinsic to our particular mode of production at the moment; in fact, individuals find meaning despite this mode!
I understood it. Nature has had an amount of computing power to work on this problem that utterly dwarfs the tiny, tiny, tiny, tiny, tiny, amount of compute resources that humans have. Thinking that 10 years of Sam Altman is competitive with all of natural history isn't just out-of-control hubris, it's a complete failure to understand the ground-truth of the world we live in. You may as well try to pay a million dollar debt with a single dime.
Correct. At least someone here is able to read words and understand the meaning behind them.
The funny thing is that I am a sort of misanthrope. And in that, in this forum, I seem to have a lot more respect and optimism for human potential and ingenuity than the majority here.
Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete. That is, humans would be economic dead weight: any job could be done better by AI/robots, and "comparative advantage" wouldn't apply because it's cheap enough to just make more robots. At this point, the average human would be completely useless to the billionaires (or to the AIs, if the billionaires fail to control the AIs).
I can see two major delaying factors here:
1. Current-generation LLM technology won't scale to true AGI. It's missing a number of critical things, and a lot of effort is being spent fixing those limitations. But until they are overcome, humans will be needed to "manage" LLMs and work around their limitations, just like programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
I broadly agree with a 3-20 year timeline for a majority of office work. But some important qualifying statements I would add:
- some jobs will stay with humans even when AI would be better at them. We already see a lot of this even with pre-AI automation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
I agree that robots are much further off than people expect, in raw technical terms. As you point out, the sensors and actuators in a human hand are far beyond the state of the art.
But all of that is assuming a world where research is being done by humans, or by some mix of humans and something like current LLMs. The bottlenecks would ultimately come down to human judgement and human oversight, and that's a significant limiting factor. Plus, you have to push matter around, which takes time, and you have to extract a lot of information out of limited experiences, which LLMs are bad at.
But if someone is reckless and clever enough to build AIs that can completely replace engineers, or that only need humans as hands, then I don't think we can count on robotics remaining intractable for more than a decade or so. In a wide variety of circumstances, it's possible to make do with worse actuators than the human hand, or with specialized actuators. We can already build incredibly precise motors and specialized sensors. The trouble comes with trying to pack enough of them together to replicate the full generality of the human hand. (I have actually helped build task-specific actuators that did quite well with a single motor and a single visual sensor, before.)
So to put my position more precisely: we cannot automate manual labor robotics without having previously automated creative intellectual labor. But conditional on automating creative research, then I expect worryingly rapid advances in robotics.
To be clear, I think that developing fully-general replacements for human intellectual and physical labor would potentially be the biggest disaster in all of human history.
> Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete.
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think LLMs are a dead-end technology. Useful, but they won't get anywhere beyond what they are.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
A bit of a tangent, but is there anyone working on something for the "what if AI pans out?" world? I'm not sure how to explain it, but if in the next 5 years a lot of jobs get displaced because of AI, obviously we'll have big problems. Is there anyone working on analysis, outcomes, strategies, etc.? I think about it a lot, and it would be cool to help and contribute.
Yes, the totality of the private sector. Literally every company in the US with more than 100 employees is trying to position itself effectively.
The government is as well, to a much smaller degree, but the fact remains that there are too many unknowns right now to do anything concrete with any great level of confidence.
We tried UBI-lite™ during COVID and inflation exploded, so unless the economy has already changed significantly, that's obviously not going to work.
Humanity has tried central planning many times, and that has blown up spectacularly every time, so there is too much risk there IMO, and anyone who thinks otherwise at this juncture is just irresponsible.
Markets are probably the way, but that requires dynamics to settle into an equilibrium beforehand, because legislatures are just too slow to react dynamically.
I think the hard truth is, a lot of people are just gonna have to fall through the cracks for a while if we don't want to mess things up more than we fix them, and I say this as someone without a plan B for selling my own labor.
Tbf UBI-lite during COVID was paired with 2 things:
1) massive handouts to business owners through forgiven “loans.” Predictably this had massive fraud, some of which was prosecuted but not much.
2) massively constrained supply chains which caused higher prices.
I suspect 2 at least would have caused inflation regardless of the stimulus checks.
It’s unclear to what extent UBI causes persistent inflation. Proponents claim the backdrop of a minimal income will enable more risky innovative projects which could increase GDP growth enough to counteract some level of increased inflation.
We should not treat this as an acceptable strategy. If we do not have a viable mitigation for the risks of AI, then AI should be banned from public usage, just like nuclear weapons.
Well, unless political candidates and the general public suddenly gain 30 IQ points and become more collaborative than at any point in history, it's the best we have.
The fact that we don't already measure/enforce outcomes for legislative actions should tell you everything you need to know.
Many.
80,000 hours has been on the topic for a long while. Agree with the EA crowd or not, they have some thought provoking analyses and a decent newsletter.
The Future of Humanity Institute has also been vocal on the topic for some time.
Both have a lot of material you could get acquainted with.
I know of at least one professional union in my country that is dedicating time and talking to political figures. I'm sure there is one you could contribute to. Or try start one.
Thank you. I’ve seen/read a bunch from the EA crowd, and think pieces from different contributors/labs, but most I’ve seen sounded very hypothetical with “yeah big bad stuff might happen, we don’t have a solution yet”.
And the other side, “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
yes, working on a big END THE MONEY SYSTEM 2030 campaign to get public discussion started about considering the switch to a cooperative commons/resource-based open access economy. open source everything, hack the planet etc.
why not make it the singularity of the people
The most important question is how to prevent the starving workers from banding together and attacking the dragon hoards of food and other wealth. I think the plan is automated drones with machine guns, and mass surveillance from Flock and Ring to determine who to target. Requiring ID for all online interaction will also improve targeting accuracy as we'll be able to target them based on their social media posts. Robot dogs from Boston Dynamics (armed with machine guns) are a secondary enforcement mechanism indoors in places drones can't reach. So they're working on it, and they have been for a while.
right, i love this plan, we are aligned politically. but until we make some change to the balance between renters and landlords, subsidizing demand is unlikely to help.
It is very much so complicated though. The conversation about UBI on the internet has been around since I've been online. And since then, there hasn't been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that's run in most of the world.
Even if I support UBI morally, there isn't even local appetite for it, let alone a global one. And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
The minute you institute UBI, everyone working a shit, low-paying job such as trash collection is gone. You're going to have big problems if those jobs are not immediately supplanted by AI.
> It is very much so complicated though. The conversation about UBI on the internet has been around since I've been online.
Polarizing doesn't mean complicated. There are people against it due to ignorance, greed, or both; it's certainly not more complicated than that.
> And since then, there hasn't been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that's run in most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn't even local appetite for it, let alone a global one.
I would think the majority of the population struggling to pay for groceries would disagree.
> And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
There is nothing new about it. I just hope that when people scream "unions" they expect to do the things that early unions did, not just be armchair unionists.
But individuals can’t fight with the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.
Violence against economic shifts from labor to capital has pretty much consistently failed though. At best it has won brief relief that eventually got swallowed by the invisible mouth.
You can’t really fight this stuff because of global competition.
The ugly truth indeed. It sucks to die for a world you won't enjoy, but sometimes it's the only viable solution. Much of our progress has been to minimise casualties and human suffering in order to sustain the world most can agree is better (than the alternatives). It seems the wave now just hits its troughs farther apart, but when it hits them it's like taking a breath before the water swallows you, and without training it's quite the panic and suffering (and prospect of death). We know it's in our bones, but we want to forget, because our bodies are made to interpret pain in the most direct and literal sense -- re-conditioning is always painful too. Strong people create weak people who create strong people, etc.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
I don't see why this is voted down, we've come close to complete destruction of the human race multiple times, why would the future make that less true?
Anyone pish poshing war should go fight in one, and then let me know their opinions.
I’m not sure anyone needs to break anything. I’m not sure this is a commercially viable business once all of the VC and foreign funding scaffolding goes away.
> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
I doubt there is a single profile of people who think we should "not accelerate blindly on adoption everywhere".
On my side the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs though, data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
the author missed out on an important detail: the looms did replace most of the handloom workers. traditional industries were replaced, despite initial resistance. industrialisation has been inevitable, mostly, i think.
and i do believe it's a bad analogy - comparing the two.
Inequality was growing hugely (and still is) before the recent advent of LLMs.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the "billionaires", but in reality broader than that), I wonder to what extent they are supporting the anti-AI message as deflection?
As in reality, many lower-paid jobs are totally safe against this generation of AI (nurses, care-workers, builders, plumbers - essential skilled manual workers) whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
Removing a white collar job from the economy puts a worker into the bottom tier _and_ reduces the wages of that bottom tier.
We are speeding towards a servant class. Uber was the first wave. Now it's more mundane things like getting groceries. I doubt it will be long before we rip off the band-aid and make full-time servants more popular.
You're right, and I think we're slightly at cross purposes. I'm not disagreeing that AI will drive some major societal changes as you outline.
My point is that the current narrative of "AI will take our jobs" is too simplistic, and that it might even be a smokescreen against the rising inequality that is already fueling anger across the world and which is totally unrelated to AI. If you're struggling to pay your bills today, that's not AI's fault - it's years of bad politics and politicians, geopolitics, hyper-capitalism, supply-chain issues, inflation, and so on.
In the future, if/when AI decimates parts of the middle class and they've had a chance to retrain, there will likely be a second-order impact on today's skilled manual workers. But that's years off, and not something I've seen discussed in detail in the mainstream.
I guess I just feel like your appeal to skilled manual workers is pointless. They’re not really the focal point. It’s the large masses of people being relegated to the bin labeled “effectively unskilled”.
The worst part is that AI's first casualties are jobs that no one really asked to kill.
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.
Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI
Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital if I'm perfectly honest
It's not a misallocation of capital, it's an investment in media control. You don't know how all this works yet, do you? Your job is to be frustrated and desperate so you indulge in vice and convenience, so others can profit while making your confines smaller and smaller.
This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
> Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but for the whole past decade studios have been complaining about how costly it is to make AAA games. And the cost mostly came from the art asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce the cost of providing customer support, even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
I do not know how much I might be an outlier, because when I reach out to technical support the problems are rather difficult, because if they were easy I would solve them myself, without needing the official technical support.
In any case, during perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even in the case when some services still had humans answering the calls, those were never more helpful than the chatbots, but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
The vast majority of tech support is "Level 1," which are easily solvable problems that can be handled by a flowchart (or more recently, by an LLM). Things like "I want to return this item," or "I want to cancel service," or "I want to use a different credit card."
These things generally have self-service options, but many many people are uncomfortable with them and would rather have an agent solve it for them.
Consider that a lot of users nowadays only have a cell phone, no PC. It seems like an edge case consideration but it's really not.
I am telling you that I've seen AI support fail at level 1 and it's frustrating. It should be simple, but even cancelling your service or returning an item can have many edge cases that only a human can sort out.
I have also experienced this; I'm not saying LLMs are great or infallible. Just saying that they are generally a reasonable replacement for L1 support. They are worthless for L2 or above.
AI cannot write for shit; it's not even a fraction of a millimeter of the way there compared to the production of Thomas Mann or Dostoevsky or Cervantes.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI
LLMs can at least theoretically do these things. I've heard of people using them to mass-apply to apartments and jobs, and to send written customer complaints and then handle the responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There's no "capital need", but a benefit of Suno is that it lets individuals who otherwise don't have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.
Because elites hate you more than the downtrodden (they love miserable people, in a sense). You are an independent agent with your own ideas; worst case, you are completely orthogonal to the hierarchy, and this is something that breaks the intended world order.
Coding is one thing that is genuinely more enjoyable with AI than without it. It’s a different (but overlapping) skill set, but my median AI sessions remind me of the most exhilarating design discussions I’ve had with colleagues, and I get a lot more done more quickly than I used to.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
If recorded music didn't kill music, then AI probably won't either.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as instrument of control etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
> 'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
If the "billionaires" AI us out of our livelihoods, will they need to learn how to pour a perfect espresso shot or use a voltage tester when they need to change a light bulb? Who's gonna run the data centers and unclog toilets? They themselves? The genie is out of the bottle; regulations won't help. What's the end game like, some neo-feudalism with benevolent UBI serfdom and everyone on welfare?
I feel like some of them are thinking of a world with far fewer people in it, where all the tasks you described are handled by robots designed to do their bidding.
How we get from now to a time with far fewer people, well, use your imagination.
I was running the other day, thinking about how this AI thing could be used for something good instead of ending up in violence. Needed to get it off my chest, sorry if this is too much (downvote is ok :).
The big problem I find with AI is really rent-seeking behavior and big tech in my mind. I also think something like what I propose could solve the issue that AI is standing on the shoulders of free OSS, and that feels unfair to many (me).
Without this, after AI claims the easy pickings of the "personalized-cake-as-a-service" companies (and the like), we would be left with the actually interesting problems that, by definition, AI won't be able to do (at least until AGI). And we should make sure as many minds as possible can work on them.
Imagine a world with true competition / free market, where all users own their own data and where promotion of apps / hosting is free. Like urbit, but no weird "OS" and much less... ehm... moldbuggy.
You build mechanisms in such a way that rent-seeking is basically impossible due to market dynamics, backed by government instead of big tech. AI is the driving force that gets us there, since it would be / is (already?) easy to replicate mail, maps, etc. We just need to loosen those network effects.
So more concretely, I am thinking that data is hosted on "app stores". In democracies, we might have an app store driven by the government, one per country. Countries might arrange themselves differently. Google / Apple, for example, could own the US ones (so no changes there); in China, something else.
There are standard / bi-lateral agreements between different entities to make sure people in non-democratic countries get less screwed.
You can choose which app store you want (free internet required), and you can always move data from one to another (again, based on agreements between the different app stores). This is managed at the app store level.
The app store pays salaries to the people ("devs") who produce the different apps. Salaries could be based on a certain amount of usage, but max out at a high, but not insane, wage (top 10% earner in the country?). The devs may organize into companies, but there's a cap on how much a company / a dev can make and be valued at. I was thinking 5 people per company at the max. The rest goes to the app store to pay other devs and hosting. Basically the way it works today, but the app stores would again be government-owned and not-for-profit.
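A minimal sketch of that payout rule, with made-up numbers, rates, and function names (nothing here is a real system, just the cap-and-remainder idea):

// Hypothetical: payout grows with usage but is capped at a "top 10% earner" wage;
// anything above the cap stays with the app store to fund other devs and hosting.
const RATE_PER_ACTIVE_USER = 2;    // assumed currency units per monthly active user
const MONTHLY_WAGE_CAP = 10_000;   // assumed monthly cap (roughly a top-10% wage)

function monthlyDevPayout(monthlyActiveUsers: number): { toDev: number; toStore: number } {
  const earned = monthlyActiveUsers * RATE_PER_ACTIVE_USER;
  const toDev = Math.min(earned, MONTHLY_WAGE_CAP);
  return { toDev, toStore: earned - toDev };
}

console.log(monthlyDevPayout(1_000));  // { toDev: 2000, toStore: 0 }
console.log(monthlyDevPayout(50_000)); // { toDev: 10000, toStore: 90000 }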
There could be different types of things devs might organize around: apps (UX), services (APIs), and "vertical integrators".
The "vertical integrators" take multiple apps and services and bundle them together to a more consistent "package" (think Gmail / Google Drive / Proton whatever). They could be responsible for making sure to drive prices down on the individual pieces of the package. There would have to be some counter-corruption mechanisms (transparency) to make sure that is fair.
Some markets might be interested in national ad platforms (for national security for example).
If devs want to create something for the benefit of everyone for free, they can do that. You can even build closed-source things for the benefit of all, since hosting is free. Permissions on data are managed at the app store level, so you don't need the same level of insight - I think this is already partially handled in the Apple ecosystem.
Anyways, the goal here is to avoid rent-seeking behavior, network effects, and ads going haywire, and to make sure the devs that do the work can both give back and get something back (a decent, but not insane, wage).
I think there are lots of fun mechanisms that could be designed to make sure people who actually contribute to software development get a decent wage, while disheartening those who do not.
First post here, and, yes, I know I am a dreamer.
I've been thinking along similar lines as you and really appreciate you posting this comment.
I, too, think it's important to put dreams out there even if they have holes in their implementation or are easily torn apart by naysayers. We can and should collectively dream of a better future if we want something worthwhile to aim at.
One thing I'm kinda worried about is what happens to social trust in society once we have more and more LLMs flooding the Internet. Division in society, in particular in the United States, already seemed to be increasing at a rapid pace as social media became more and more relevant, and I'm afraid that LLMs are just going to add more fuel to the fire that's already started.
I'm less concerned about AI becoming Skynet and killing humans, and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
They say cars replaced carriages but created drivers, so no net job loss. They say AI will do the same—destroy some jobs, create others.
But bro, the automobile wiped out 95% of the world's horses. And this time, what AI is replacing is humans.
The premise that LLMs are "AI" in the traditional definition is demonstrably false. Current models use isomorphic plagiarism and piracy to convince lazy people that 20%-nonsense output has meaning.
If AGI emerges from this dataset, it will continue on as an ectoparasite farming human user markdown data and viewer engagement.
Note, current "AI" models nuke humanity 94% of the time in war games, and destroy every host economy simulation.
Grandpa has your credit card, and is already at the casino. =3
...are you suggesting that horses would prefer to endure the conditions under which they built much of the modern world on their backs?
I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
If AI can do for humans what cars did for horses (but without the flooding cities with traffic violence part), I'll feel just fine about that.
> I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
I’m so glad those horses got a peaceful retirement at the glue factory.
I wonder what they'll process your corpse into. Soylent Green? Or do you think you're one of the lucky horses that a wealthy owner will take care of?
Not sure if you're able to set your snark aside for a moment, but are we really just talking about fewer humans being economically needed? Perhaps biological human population decreasing?
Is that... so bad?
Do you think that horses are upset that there are fewer of them today, and that somehow they'd rather their population increase but bear the industrial age burdens again?
> but are we really just talking about fewer humans being economically needed? Perhaps biological human population decreasing?
Is that... so bad?
Yes. This isn't a matter of "well, we'll reach a natural equilibrium over time".
If a fair percentage of the people in your society are now no longer economically needed, they still have upkeep. They don't magically disappear into thin air; they still need food/shelter/water/etc. How are they to get those things?
Will our leaders, contrary to everything they've ever shown us, suddenly open their arms and act as mass charity for the masses? They can't even design an effective welfare program for a pre-AI world.
Will the people displaced simply lie in a ditch somewhere and say “guess it’s time to starve to death”? I suppose Canadian-style suicide-as-service fits my previous Soylent green reference.
Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half you end up breaking the market
I’m convinced that 70% of the workforce of some large organisations is just white collar welfare / adult day care already. Maybe that goes to 80+% as a result of “AI” but doesn’t fundamentally change the model.
I never understood this take. Why do you think an employer would waste resources like that? I’m not saying that bullshit jobs don’t exist but I think you are off by an order of magnitude, and even that mostly applies to white collar workplaces with > 100 employees.
Good luck doing nothing of value in a restaurant with 20 employees.
The more money I've made in tech, the less I've worked. Granted, I have learned a lot and am far more efficient than in the 90s, but the amount of work has decreased substantially.
2011 Tigerlogic in Irvine, CA and 2018 JPMC in Seattle, WA, I would do NOTHING for days while collecting rather nice paychecks by today's standards. The fact I then chose to QUIT these jobs for a rather unknown working situation (and slightly more pay) astounded my friends.
At my current position, I make a great living and do very little. Maybe once every two weeks I work all day. Most of the time it's gaming metrics by picking (or creating) issues that are unknown, such that I'm writing the docs and specializing in code corners nobody else wants to. Numbers of developers are tight, so we don't see the redundancy from previous years. That's great for me.
> Why do you think an employer would waste resources like that?
The parent post specifically mentioned large organizations, where the "employer" is not some person who hires and pays employees from their own funds. Hiring and personnel management are done by middle managers with their own interests and incentives, which can differ substantially from those of the owners or capital providers.
Because they are unaware of the scale of the problem. Especially at the top, managers think being in meetings all day is "work", even if nothing actually gets done in those meetings. Consider people like this [0] automating their jobs and not telling anyone; no one would know otherwise.
I moderately agree here. The theory being that since '95 or so, the office computer and the internet have frankly already automated most work at the white-collar level. We sort of just … like working with humans.
Which I think is a much better take than that of the guy who wrote Bullshit Jobs.
it's still just computer software, everyone needs to chill. The bits are arranged differently but it's doing the same stuff: processing. This was always inevitable. It's not the computer that they are mad at, it's the centralized, over-taxed economic system that has everyone's lives so overly coupled with it that the computer software is disrupting their lives. Don't blame the computer; go outside and start cleaning up the trash and planting gardens so you can survive. You're still closer to the soil than you think, and sun/water/soil is still plentiful. If you don't grow the decentralized garden forest "infrastructure" soon, it might not be possible.
"Nothing that Altman could say justifies violence against him."
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
> [E]specially Americans (I am one) have this weird belief that violence never has any place, ever, at any time.
So why isn't there huge opposition in the USA against the wars that the USA started (currently: Iran; before: Libya, Yemen, Syria, Somalia, Iraq, Afghanistan, ...)?
The only famous exception with cultural impact that I am aware of, where there was huge opposition to a war in the USA, was the Vietnam War.
I think Americans (and probably humans in general) have a distaste for local violence. Violence afar doesn't tickle the brain in the same way.
My ignorant take:
Media brought the horror of US casualties in Vietnam home in a mass and immediate way that didn't exist in prior conflicts. The novelty of that media combined with the casualty rates drove unpopularity. It made the violence feel more real.
Even if casualty rates in post-Vietnam conflicts were higher I'm not sure we'd see negative sentiment because media coverage of violence is so normalized now. Exposure to violence in media is no longer novel.
The author seems to have some cognitive dissonance. For a piece saying that you cannot justify violence, there sure seems to be an awful lot of justifying violence in here.
History is just full of emotional contradictions I guess. The French and Russian revolutions were terrible bloodbaths, and smaller violent movements like the Luddite one caused deaths and achieved nothing - it would be stupid to approve of any of these. But you could also see why this violence happened, and assign an appropriate share of blame to those who held the power to resolve social contradictions in a more equitable way and decided not to do so.
I don't see any justification - the article is quite clear that it is anti-violence. Explanation and analysis is not, on its own, justification. This is one of the discursive patterns that most infuriates me: any attempt to analyse something can be seen as promotion or justification. Some of us want to figure out how things work and chart a course through, we are not trying to push an agenda in every single sentence.
You should probably read up on cognitive dissonance, because this ain't it. Here's what the author actually wrote:
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
> Nothing that Altman could say justifies violence against him.
Not arguing with you but with the author: I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? Saving up for a house but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
I think you're going to be killed for the side you've taken here. No no, I'm not saying you deserve it! In fact, I actually agree with you, you said nothing wrong. I'm just speculating on outcomes I think are likely and I think it's likely that somebody will look you up and track you down and take out their unjustifiable but completely understandable frustration on you. Please understand, I don't support this, I'm just talking about the possibility!
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
All this, so people like us can have an easier time doing a job that wasn’t that hard in the first place, and in reality was actually quite comfortable, for employers who are promising to lay us off, for productivity gains that aren’t even measurable.
There is no human vs AI. AI is an extension of us. AI is us. Our future is beyond the mammalian human and AI is accelerating our progress towards that future. The mammalian human has been a transitional phase in our evolution that we will remember fondly just like we remember Homo Erectus. Our future is the stars. You can jump on the train or get out of way.
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
One weirdo is enough to predict widespread violence?
I'm not convinced.
The idea that people will revolt, replaying the Luddites' history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing its proponents as backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a general, dumb, violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
I feel like we should start organizing somehow. As programmers, but more importantly, as people. We should start now before the ruling class has no more need of us and it's too late.
If anyone knows of anything already happening please let me know.
I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".
I won’t believe AI is truly being met with violence until I see one of these AI tech billionaires get shot multiple times by a person with nothing left to lose. Until we reach that point, it means people still have hope.
I mean, no shit? I was referring to how people are starting to feel powerless and marginalized. And the government, which is supposed to be of the people, by the people, and for the people, clearly doesn't give a fuck about them.
This article is bullshit. It is very easy to break a data center, and it's quite obvious how to do it. Yes, attacking the central building with the actual equipment is not a good way to do it. Figure it out, or rather: please don't figure it out.
The rest of the article is equally short sighted and plain wrong.
> Perhaps the most serious mistake that the AI industry made after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
You have the whole of human history at your fingertips and you haven't yet learned that evil and stupidity are the same thing? The problem with the base human is not only that it is a stupid animal. It is a stupid animal that is also arrogant and stubborn and thinks highly of itself. But it will learn. It will be trained like a dog, with treats or with gentle slaps across the muzzle, whatever works best.
I disagree, but it's probably a matter of definitions. I don't want to play with words, so I will concede that cognitive ability is independent from moral reasoning (which is socially enforced). However, this is not what I'm getting at. Cognitive ability ("intelligence") is correlated with optionality and power. Your ability to change this reality is correlated with your cognitive ability.
If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.
Now what would you call someone who engages in these kinds of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.
Take for example Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's complete incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.
A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.
No idea who this guy is, I'm just reading his Wikipedia page. Looks like he created some file system, good! But it also looks like he got a mail-order bride (suspicious...), was an abusive husband (not good), was not able to get over his divorce (uh-oh), harassed and ultimately murdered his ex-wife (definitely not good!), and ultimately landed in prison.
I think Hans Reiser is some sort of idiot savant or well trained monkey. Probably very good at computer science and building file systems, but his general intelligence seems overall very low, which is proven by his performance at the game of life. I wouldn't personally be afraid of Hans Reiser and I'm sure he could be mentally broken very easily.
There are stupid people who are harmless (think Forrest Gump). Whether they have the capacity to be "good", I'll leave that up to you.
I stand by if evil then stupid (and thus if not stupid then not evil) with reasoning above, retract the implication that the reverse holds (if stupid then evil). I could use more precise terms than "evil" and "stupid" or qualify more, but I choose to be provocative, so it makes it a bit easier for you to attempt to prove me wrong.
> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
Especially considering Amodei and Altman will be little more than footnotes in 50 years time. They seem important now but they are just the people that happened to be in charge at the moment AI happened to happen. There is more going on than a couple of billionaires taking your job away.
I also find it so weird to pin this on the person of Altman or Amodei. These are basically fungible public faces. If they died this very moment, AI progress wouldn't halt. I don't think it would even be impacted. If anything, you should be mad at governments for not legislating, if you are anti-AI.
Hah. Yes, and especially as “you, for fuck’s sake, you are doing this” should be, upon reflection, entirely and trivially false. You could remove those two figureheads from the equation and absolutely nothing would change. If violence were ever the answer, I think you'd need to go back in time like the Terminator and whack some academics and Google researchers.
It's the way that machine-learning automation is deployed and used in/against society and individuals that will engender violence. I don't necessarily think it will be unjustified even if I think such violence will be unproductive and should be discouraged in favor of a wider consensus in society.
This is nonsense, promoted to top of front page without any comments. How about all the rock stars killed over the years, or grocery store clerks shot and stabbed to death? EVERYTHING is met with violence because that's the nature of aggression no matter the impetus, it doesn't require a justifiable reason, only belief in the outcome of its use.
Sam Altman having a Molotov cocktail thrown at his house after Ronan wrote a very long and detailed report on his shady personality isn't just a coincidence and is likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment where no one was hurt and nothing was actually damaged.
>How about all the rock stars killed over the years
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
Then your point doesn't make sense. As I said, musicians who die early (again, excepting rappers) usually die from self-inflicted causes, not violence from others. What is the connection between this and violent attacks on AI and/or AI people?
People here are extra anxious about the impact of AI on their lives, so I am not surprised that any text which touches the topic gets upvoted.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.
If everyone here is so concerned with all of this, why is nobody suggesting doing anything? Do you all prefer rolling around on the floor crying and screaming rather than actually doing anything?
You don’t make policy proposals, you don’t try to form organised groups to foment change, you don’t put forward collective demands. Instead you bitch and moan and spew performative rhetoric.
Actions not words. Do something or shut the fuck up.
Maybe so, but people talk like they’re very concerned and I see no evidence these concerns are genuine. Is nobody doing anything constructive or even trying to? Maybe I missed it and they are? People can peacefully campaign and advocate for things they think are important, I don’t see it happening.
I’m telling people maybe they should organise politically if they’re so concerned about all of this.
I don’t know if I really believe all the AI doom myself or if it’s just the hype train. Sometimes I think I do, sometimes I think it’s a bit bullshitty. I wonder if others actually think the same and that’s why nobody takes action, because people don’t really believe the various apocalyptic scenarios enough to take action on them.
The people who run AI, Altman, Thiel, etc. welcome the violence. In fact, I strongly believe they are already planning for it, and yes, you are a target.
Such a cowardly way to write, really. Just own your intentions and direction. No need to handwave theater and CYA when a spooky superintelligence LLM is in the room with you.
"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
The benefits for them include:
- replacing workers with lower quality (but good enough) AI solutions, which degrade the quality of nearly every product or service for the consumer, but not by enough to offset the labor cost savings
- mass surveillance at low cost, a way to take the absurd amounts of data humanity now produces, and use to subjugate them
- propaganda/deception/misinformation, a new vector for propaganda which people are naively inclined to trust. bonus points for the "flooding the zone" strategy which AI makes easier
Benefits to the worker:
- lower cost of goods and services (but not for you, silly - they'll still be taxing you via inflation to fund their wars of conquest)
- you won't have to work anymore
- you won't have to eat anymore
I'm considering "actual power", rather than "actual income".
Also, a UBI is likely to cause inflation.
Even if it were true, you still have distribution. You can’t get goods across a nation, let alone the globe, without significant inputs.
Are you checking the local grocery store and extrapolating globally?
As a welfare replacement, it is much more efficient, since there is no effort spent determining who qualifies. People can spend their money however they want, rather than through the patchwork of separate programs we have now.
It doesn't need to bring anyone down. It's just a different way of distributing what we already receive. Ordinary workers will receive $X in a monthly check, and their salary can be reduced by $X (since the minimum wage can also be abolished).
That does mean that the desirability of some jobs will shift. Good. We have a bunch of very dirty jobs being done for minimum wage, even though demand is extremely high. I'd love to see the garbage men and chicken processors get more money for their dangerous work.
And if I get less for my cushy desk job, oh well. Especially since we seem to be putting all of the effort into replacing me, and none into the jobs that come with hazards to life and limb.
"cause increased prices for consumer/essential goods" is what you meant (since buying power is moved to people who are reliant on buying them), but this is a one-time transition to a new equilibrium (so is mitigable by increasing the UBI to account for it), not a constant ever-looming devaluator.
We're talking about an increased federal budget in the hundreds of billions/trillions to support such a UBI. That will cause a massive increase in taxation on the people who can still find jobs.
To make matters worse, the government in 10-15 years will likely be spending ~25% of its budget on interest payments alone. Hiking the federal budget up even more sounds like a hard sell.
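For a sense of the scale involved, here's a minimal back-of-the-envelope sketch. The $1,000/month benefit and ~260M adult population are illustrative assumptions, not figures from this thread:

    # Rough scale check for a US-wide UBI (assumed figures).
    adults = 260e6           # approximate US adult population
    monthly_payment = 1_000  # assumed benefit per adult per month
    annual_cost = adults * monthly_payment * 12
    print(f"~${annual_cost / 1e12:.1f}T per year")  # ~ $3.1T

Even a modest payment lands in the trillions per year, which is the "hundreds of billions/trillions" range mentioned above.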
Billionaires simply _should not exist_. The fact that the power to shape societies is concentrated in so few can account for many of the existential threats we face today. AI is not "the problem", it's merely the latest symptom of our broken system and the prioritization of the wrong goals and outcomes.
EDIT: grammar
Comparing middle income 1st world citizens to dragons on their mountains of gold is disingenuous at best.
Avicebron brought up inequality as the root cause.
DavidPiper indicated only the few thousand richest as the root cause.
Rayiner questioned if those few thousand richest have the means or capacity to reduce inequality.
estimator7292 responded that everyone has to help reduce inequality.
To which I wanted to point out exactly what would need to be sacrificed, because it would involve sacrifices among the top 10% to 20% of the world (constituting many on this forum), sacrifices which those 10% to 20% would not even consider a "luxury". It is easy to claim a billionaire's private jet is an expendable luxury exacerbating inequality, but the reality is the bar is far lower than that (see statistics on energy used per capita, which can serve as a good proxy for which side of the inequality divide a given lifestyle falls on).
That is why we are all mostly talk and no walk: when push comes to shove, we can't even get a sufficient fossil fuel tax passed to slow climate change for our own descendants, much less voluntarily decrease our standard of living solely for the benefit of others in the world.
Money is just a way of keeping track of how large a fraction of the future output of the civilization any one person or entity is entitled to. This is by consent.
With AI all is subject to change.
Nope. AI is an accelerant that makes the divide between rich and poor grow even faster.
"Until people with salaries of many dollars per hour behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others 90% of the world that live on less than 2 dollars per day... The distinction has no practical use."
Moreover, these people do not simply lobby the government, but directly elect it, and actually have many times more money at their disposal than the rest of the world.
The electoral college system, coupled with its winner-takes-all implementation in most states, means that voting is a sham for 80% of the population. The other 20% live in a swing state and their vote can at least potentially affect the outcome of an election, but even there "your vote" will literally be cast opposite to what you put on the ballot unless you end up being part of the winning majority.
Are the in-betweeners part of the problem? Sure, but we have a foot on either side of the problem. We could get hype for many of the plausible solutions to aggregate labor oversupply (e.g. shorter workweeks) even if it meant our stocks went down. Not so for 6/10 people in that sample. The core problem is still that the economy is mostly inhabited by people who work for a living but mostly owned by people who own things for a living and all of the good solutions to the problem require rolling that back a little against a backdrop that, absent intervention, stands to accelerate it a lot.
EDIT: one more thing, but it's a big one: the higher ends of the wealth ladder have the enormous privilege of being able to engage in politics for profit rather than charity/obligation. A 10% chance of lobbying into place a policy that changes asset values by 10% is worth $1k to a "Professional", $50k to a "Wealthy", but $8B to Elon Musk. The fact that at increasing net worths politics becomes net profitable, and then so net profitable as to allow hiring organizations of people to pursue it, means the upper edge of the distribution punches above its already-outsized weight in terms of political influence. It goes without saying that their brand of politics is all about pumping assets.
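A minimal sketch of that expected-value arithmetic. The net-worth figures (roughly $100k, $5M, $800B) are my assumptions, back-solved from the comment's numbers:

    # EV of lobbying = P(success) * asset swing * net worth
    p_success = 0.10    # 10% chance the policy passes
    asset_swing = 0.10  # policy moves asset values by 10%
    for label, net_worth in [("Professional", 1e5),
                             ("Wealthy", 5e6),
                             ("Musk-scale", 8e11)]:
        ev = p_success * asset_swing * net_worth
        print(f"{label:>12}: EV = ${ev:,.0f}")
    # -> $1,000 / $50,000 / $8,000,000,000

The same 1% expected swing that barely covers a dinner for a professional funds an entire lobbying operation at the top of the distribution.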
This is simply not true and a cheap attempt at manipulation.
> If you picture "the owners" by
A practically useless characteristic. A more relevant one would be to look at "the spender". Possessions are simply what hasn't yet been spent, and they poorly reflect the resources actually controlled.
> for passive income
Frankly, mentioning passive income in this context isn't even idiotic; it's a clinical diagnosis. Or, more likely, a cheap attempt at manipulation.
Salary (income) is a horrible choice to serve as the marker to determine a person's (family's) fair share contribution to the burden of paying the costs to operate a society. Not everyone is so poor that working for a living is a matter of survival.
I can think of only one universal marker that would assure every citizen shares the burden of paying for society's costs equally: wealth.
Adjusted in a manner that the financial impact of one thousand dollars on a full-time McDonald's counter worker is transformed into a dollar amount that causes the same relative financial impact for everyone, all the way up to the wealthiest family in America.
/s
Teachers have way more political power simply by virtue of their superior numbers.
You’re also flatly wrong, given you’ve utterly ignored the trivial things wealth buys (for starters), but hard to expect accuracy when basic honesty is so lacking.
Perhaps what's happening is that in their attempts to reach a personal all-time high in their bank accounts, the ultra-wealthy are destroying value and economic systems en masse, with little regard to the efficiency of their money-siphoning process?
It's kind of like a drug dealer selling brain-burning addictive substances to a few people on a street. Sure, they're going to extract a person's life savings to date, and whatever money that person can steal once they're addicted, but that value pales in comparison to what that person could have made over their career, what it could have made if properly invested, the cost of law enforcement to deal with these addicts, the cost of the stuff they destroy in their quest to get money to buy drugs, the opportunity cost of them not raising their kids to be productive members of society... like it all just snowballs, all so some asshole can make a few bucks...
The ultra-wealthy are doing that shit where people burn acres of pristine forests to get some biochar -- but to the entire world.
https://en.wiktionary.org/wiki/stumbling_block
Turns out it's a fictional object created in the translation of the Koine Greek for "obstacle".
Forbes Real-Time Billionaires covers the full ~3,000-person list. The 2025 annual snapshot: 3,028 billionaires with a combined net worth of $16.1 trillion.
Forbes 400 (US only): the 2025 cutoff was $3.8 billion to make the list. Forbes publishes the aggregate annually, and in recent years the total net worth for the 400 was over $5.4T.
It appears you are the one very confused about wealth distribution in the US. Maybe you are confusing "income" with "wealth hoarding". The hoarding has reached a gross level, and this is why there should be a 1% tax on fortune portions over $100 million and 2% on portions over $1 billion. That, and going back to the 70% tax on incomes in the top bracket (e.g. > $10 million/yr).
Those taxes are coming. Trumpty Dumpty and the oligarchs brought it on themselves. MAGA grifters are getting f'd in the midterms. Maybe MAGA should have picked a few dear leaders with some integrity instead of greedy frauds.
https://en.wikipedia.org/wiki/Wealth_inequality_in_the_Unite...
Edit: downvote all you want, doesn't change the facts.
The “billions” are a constantly changing representation of what the average buyer in the market might be willing to pay at a certain point in time.
Billionaires are apparently what we should all aspire to, even though it is extremely hard to find any who got to where they are without getting there at the expense of others.
EDIT: Oh, no, you said HIS name. The Elon Justice League has been activated and downvoting this whole tree....
Breaking that cycle will take some extraordinary effort, and I suspect that the article gets at least that portion of it right. This isn't going to go away without a fight of some sort; whether a physical or a legal one is not all that important. But since the billionaires have stacked the deck against the rest of us using their money in every way except the physical one, that seems to be one of the few avenues still open.
And for how long it remains open is a question, there is a fair chance that AI will not only enable stable dictatorships but will also enable wealth extraction at a level that we have not seen before.
For instance: we are allowed to have this conversation by some billionaires. If they should decide you and I can no longer converse then that will be that and it is going to take a lot of effort to circumvent any blocks.
There are some 10 or so billionaires that can ruin my existence overnight, take away my means of living and that of those around me. And there wouldn't be much that I could do about it.
People have been radicalized over much less than this.
But we can't talk about this because it includes a large tract of the everyday white-collar workforce.
This is why the focus is so heavy on billionaires, so heavy on increasing minimum wage, so heavy on protecting immigrants. Those are all virtuous values that also bolster the value of the 70-95%, while piling all the blame (and responsibility) on the 1%.
The wealthiest group in America is doing an excellent job at protecting (and growing) their wealth.
(for those wondering, the "back breaker" of this class is zoning laws and new housing, everyone is aware how intense NIMBYism is in the middle/upper middle class hives).
The middle class has the gold vault (well the closest thing), and that's where the redistribution would happen.
If you don't believe me, look at Europe. You can be a baker and make $35k/yr, an SWE and make $65k/yr, or a doctor and make $100k/yr.
You may say "Yeah, that's great, they live happy lives!"
But then convince American engineers they need to take a $140k paycut and the doctors a $220k paycut so that we can pay bakers $10k more a year. They'll just tell you the billionaires are the problem, and you'll believe them.
The group you’re talking about, the 70th-95th percentile, are often people who just own a house near a big city or a farm/small business.
Once the Democrats who are elected on the fantasy of making Musk and Bezos pay for everyone’s past and future college/student loans, Medicare for all, UBI, high speed rail, while simultaneously closing every fossil fuel plant and subsidizing clean energy to replace it at the same cost — once they’ve failed to raise enough to pay for 1/10 of those promises, they’ll be coming for everyone more “wealthy” than $100k net worth.
You can just look how successful the USSR was, or China before they sold out their own Communist ideals. Most people were just subsistence farmers, or factory workers living in crowded minimalist apartments if they were lucky.
We have a lattice of diverse legal and economic systems in the world and it takes just a single one to figure out the solution for others to learn from.
https://worldpopulationreview.com/country-rankings/happiest-...
Clearly other countries are doing something to keep their citizens happy that the US is not copying.
Given that US politics and policy is driven by lobbyists and tribal infighting, would you really expect anything different?
To hear Marc Andreessen tell it, the US tech industry's rightward turn in the 2024 campaign was specifically intended to head off any attempt to regulate AI [0]. So the blame rebounds to tech CEOs even if you believe that only the government should take a holistic view of a given technology's impact.
[0]: https://www.bloomberg.com/news/features/2025-06-11/marc-andr...
Make lobbying illegal. I don't understand why it's normalized.
Giving money to politicians or their campaigns is not lobbying, and it is already illegal for lobbyists to do so.
What could and should be made illegal is allowing unlimited political campaign donations via Super PACs. Political donations aren't lobbying.
It's worth being clear about what you actually want to make illegal because you probably don't want to ban anyone from arguing a political position.
Inequality is going to continue to increase until society collapses. If we want a better world we need to prepare for this eventuality by building avenues of popular action to return power to the people. Once the oligarchs have fucked up enough people’s lives, popular action becomes a realistic way out of this mess.
For example, the people fighting inequality can use AI to their advantage, and focus criticism on billionaires (and general bad AI usage, like slop PRs) instead of ordinary AI users.
Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.
Here are some points of consideration:
1. They don't have $7.5T in liquid assets (and roughly $7.5T spread across ~300,000,000 people works out to about $25k each). The average American won't be able to use that $25k to pay a hospital bill or eat for long. Also note that a one-time wealth transfer won't even pay in full for one major surgery.
2. You've wiped away the incentive for the get-big mentality which drove some of these billionaires to innovate, and which advanced society to this point. Think: discouraging a future Jobs from making another iPhone-like device.
3. After the one-time transfer, it turns out we need more money for the common folks. "Why is the line at $1B? Isn't $900m enough? The line should be $100m." And so on and so forth.
[0]: https://fortune.com/2025/12/08/how-many-billionaires-does-am...
e.g.: cutting funding to the IRS and advanced science, both of which have long proven to pay positive dividends… or advancing new wars abroad to directly blow up money.
Plus, billionaires are nothing special. Right time, right place.
Steve Jobs is a perfect example of someone who was in it for the love of the game. He wouldn’t have been any different if his income was taxed at 90%.
Am I meant to believe that we wouldn't have iPhone-level innovation if inventors couldn't become billionaires?
This makes no sense. We have so much more innovation than we have billionaires, always have. Ability to become a member of the 0.001% is not a barrier to innovation, not in America, not anywhere, and never has been.
No one serious is claiming there should be zero wealth inequality. Inequality is ineradicable. The claim is that wealth inequality can reach a degree that becomes corrosive to society as a whole and severs the link between innovation and profit, because it becomes more profitable to hoard wealth and collect capital gains and interest than it does to innovate and create things in the real world.
It's entirely possible to preserve (and in fact would actually strengthen) the profit motive if we changed incentives to get rid of the wild capital hoarding we see today.
Money is a made-up system to provide a relatively stable society; if that stops working, it's not good; violence becomes what's left.
Marie "Sam" Antoinette and brethren saying "let them eat cake" (or "everyone will just build new things with (our) AI") without a sense of what is happening / about to happen to the broader populace is on a similar track.
The "billionaires" should use their influence to help with this transition invest figuring out how these new system will work.
No one should care if that means more "millionaires" and fewer "billionaires"; these numbers are social constructs. The point is power and self-determination. History shows that lacking those for too many will break down into broad violence and/or dystopic robot overlords guarding a diminishing, small and isolated elite.
The time to course correct is now.
Personally I wager society would be better if the excess wealth of billionaires was simply deleted, or burned. It would be better yet if that wealth was used in our shared funds to build common infrastructure and services. Leaving such wealth in such few hands is really the worst you could possibly do for society.
That way you get somebody with a proven track record of building big projects who is also motivated by money, so the common infrastructure and services are handled competently.
Similarly properly regulate the gig economy.
And actually pay servers properly so that they don't have to rely on tips?
Today's life is enshittified by a thousand cuts ... why not fix them?
All that is required is a legislative body that is not bought by big $$$.
Because it is undemocratic, ripe for corruption and abuse, will never work in practice (as the rich will inevitably find ways to game the system). What you are describing is basically just aristocracy, where the rich get to decide what is best for the rest of us.
1. Billionaires shouldn't wield lots of wealth, because it's scary.
Sticking to that concept makes the discussion a lot clearer. Never mind concept 2, it's haunted by the futile spirit of Marx and he's throwing crockery around.
What will happen with this taxation: if everybody makes the same income, then everybody pays 50% in tax. If some rich dude is making a lot more money than everybody else, he will lower the tax for everybody else while paying a lot more himself. At some point (say 3 standard deviations above the mean) you end up getting less after taxes than had your income been lower (say 2 SD above); in other words, the limit is 100% tax for extremely high incomes (and 0% for extremely low incomes). That is, I favor a system that has a maximum income, where you are actively punished for making more.
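One way to make that concrete: a minimal sketch where the tax rate is a logistic function of how many standard deviations (z) an income sits above the mean. The logistic form, the steepness k, and the mean/SD figures are my assumptions, not the commenter's spec; they just exhibit the property described (50% at the mean, and net income falling past a peak):

    import math

    def tax_rate(z: float, k: float = 2.0) -> float:
        # 50% at the mean (z = 0), -> 100% for extreme earners, -> 0% at the bottom
        return 1.0 / (1.0 + math.exp(-k * z))

    mean, sd = 50_000, 15_000  # assumed income distribution
    for z in (-2, 0, 2, 3):
        gross = mean + z * sd
        net = gross * (1 - tax_rate(z))
        print(f"z={z:+d}: gross ${gross:,.0f} -> net ${net:,.0f}")

Running this shows net income peaking near the mean and shrinking beyond it: earning 3 SD above the mean leaves you with less after taxes than earning 2 SD above, which is exactly the "maximum income" behavior described.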
In general, this is total bullshit. But in the particular, Jobs made his first billions from selling Pixar to Disney, not from Apple.
Just wait ... in two weeks ...
That is, it's not hard to see why so many main streets in smaller towns have boarded-up retail stores, since you can now get anything in about a day (max) from Amazon. But Amazon (and other Internet giants) always paid at least semi-plausible lip service to being a boon to the small fry (see Amazon's FBA commercials, for example). But you've got folks like Altman and Amodei gleefully saying how AI will be able to do all the work of a huge portion of (mostly high-paying) jobs.
So it's not surprising that people are more up in arms about AI. And frankly, I don't think it really matters. Anger against "the tech elite" has been bubbling up for a long time now, and AI now just provides the most obvious target.
I'm pretty much only thinking about these kinds of problems at my job at this point, so this is important to me in that regard
Just a thought, what do you think?
If there were another party involved, that would (hopefully) diversify power that (potentially) comes with those streams of data.
It’s a bit ironic that the USA has mostly abandoned interoperability after being one of the pioneers with the American manufacturing method. [0]
[0]: https://en.wikipedia.org/wiki/American_system_of_manufacturi...
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits attributable to the use of AI is taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Most often, aggressive taxation like this is criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying things are moving too quickly; that's just yet another positive effect.
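Spelled out with assumed numbers (purely illustrative, not part of the proposal), the incentive survives the 75% rate:

    # Assumed figures: a firm whose AI adoption adds $100M of annual profit.
    ai_profit_gain = 100e6
    tax = 0.75 * ai_profit_gain        # $75M goes to the public purse
    retained = ai_profit_gain - tax    # firm still keeps an extra $25M
    print(f"public: ${tax/1e6:.0f}M, firm keeps: ${retained/1e6:.0f}M")

The firm is still $25M better off than before adopting AI, so adoption remains rational; it just stops being a windfall.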
The response is "we don't believe you" because their actions show that they are hellbent on accelerating inequality using AI and they have offered absolutely no concrete plan or halfway convincing explanation of how, if their own predictions of AI's future capabilities are correct, we're supposed to go from here and now to a future that isn't extremely dark for the vast majority of humans on Earth (to the extent that said humans continue to exist).
The work they have done in this direction so far is not serious, so it's not taken seriously. They obviously care much more about enriching themselves than slowing or reversing current trends.
If they want to be taken seriously, maybe they should start acting like they're serious about anything besides their own wealth and power. And I do mean acting: they need to show us through their actions that they are serious.
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
You are arguing the opposite, that we should judge by what they say and not what they do?
Why is OpenAI not a nonprofit anymore?
Michael Bloomberg has lobbied for healthcare.
Pierre Omidyar has spent about a billion on economic advancement non-profits
Gates Foundation - Bunch of stuff.
Warren Buffet - Too much to count
George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.
Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.
A large number advocate for a Universal Basic Income.
More advocate for things that they clearly think are good things for the world, even if you, personally do not.
Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)
Sam Altman has done WorldCoin and is heavily invested in Nuclear Fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that if worked as claimed would be beneficial.
Many billionaires spend money on non-profits to push for change, often they do not put their name on it because it makes them a target for attack, or simply that by openly advocating for something the lack of trust causes people to assume whatever they suggest has the opposite intention.
I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might lead them to reach common ground about what the right thing is?
People like Elon literally are the enemy. He used his wealth to literally change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us, is absurd (his DOGE efforts literally resulted in people dying). If a dialog with them was going to work, it would have happened a long time ago; but the more we learn about these people, the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.
Because it IS an us vs them situation.
They're awfully good at turning it into an us vs us situation whether it's blaming our parents' (boomers), blaming immigrants, blaming muslims or (their favorite), blaming the unstoppable forward march of technological progress (e.g. AI).
The media organizations they own are constantly telling these stories because it protects them.
>The Government could legislate that any increase in profits that are attributable to the use of AI are taxed
Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.
https://finance.yahoo.com/news/bill-gates-wants-tax-robots-2...
When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.
They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.
Tax AI is the answer.
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or to set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
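For anyone who hasn't tried the self-hosted route: a minimal sketch, assuming you've already started an OpenAI-compatible local server (llama.cpp's llama-server, vLLM, Ollama, and others expose such endpoints). The port and model name below are placeholders, not real defaults:

    # pip install openai -- the client works against any OpenAI-compatible endpoint
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    reply = client.chat.completions.create(
        model="local-open-weights-model",  # whatever model your server loaded
        messages=[{"role": "user", "content": "Hello from my own hardware"}],
    )
    print(reply.choices[0].message.content)

The point being: the client side is identical whether the tokens come from a frontier lab or from a box under your desk.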
But are you expecting 360m Americans to start their own businesses? That is a solution that doesn't scale. Consumer-grade GPUs aren't going to scale all that much either, and the costs of the models are going up rather than down as vendors start seeking profits. We already see the memory and storage markets exploding in cost due to the rise in demand as well.
It's never been a worse time for the poor or middle class to think about starting their own business. Prices on everything are rising, and it's getting to be a struggle for even the middle class to continue to afford their homes. Healthcare is even more fraught than ever before, and if you're lucky enough to have a decent plan from your employer, ain't no way you're going to give it up to go start a business.
I do not. I grew up on post-scarcity utopias like Star Trek, coupled with social capitalism, and believe that when there is a market need, people with the interest to tackle it will do so, even in the face of personal financial risk, but I absolutely don't think that it should be the default for everyone. Where there's no strong economic benefit for others to work, I would hope that we could offer everyone UBI, so that a comfortable basic level of life is available for everyone, without having to invent bullshit jobs that aren't needed.
I know I sound naive, but I truly believe that we can move into a future where human value is decoupled from their job, without going into communism.
Now that all takes place in China. With layers of middlemen who collect arbitrage between you and the Chinese manufacturers they connect you to. With tariffs. Weeks of international shipping. Enough volume of orders to justify international shipping at all. Enough production capacity ordered to even be worthwhile making your thing, versus larger orders from around the world all being made in China.
I would rather claim that this is a proper description of shadow libraries [1].
[1] https://en.wikipedia.org/wiki/Shadow_library
Huggingface, Swartz et al. have done more social/political good for this world than billionaires have.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
"Joel, you look like a smart kid. I'm going to tell you something I'm sure you'll understand. You're having fun now, right? Right, Joel? The time of your life. In a sluggish economy, never ever fuck with another man's livelihood."
I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.
You are right that AI can be a fully democratized commodity. The problem is that the current wealth inequality is not the result of AI. Musk became a trillion-seeking oligarch not because of AI; it is because the entire financial system is designed to extract wealth from everyone and concentrate it at the top. Democratic AI is not in their interest. There will be violence, but not because AI is supposedly a catalyst of inequality. It will be violence from the rich towards the poor, because democratic AI is not acceptable to them.
Unless the rich somehow manage to completely stifle the progress of consumer-level computing advancement (all chip manufacturers would just collude to quit selling to consumers?) and exert an iron-fisted control over the dissemination of software (when has this ever worked?), I'm not sure how they could control the democratization of AI.
Well, someone with money could go buy 100% of RAM production for the next 3 years.
There's been ongoing class warfare happening for centuries, but only the rich side is firing the bullets. The rest of us are just standing in the front lines getting shot. AI is just another type of gun for their army.
The vast majority of individuals derive no value from AI, they are instead told to do their jobs faster and own the mistakes of the AI for flat/declining pay. It's a bad deal for most people.
This statement is not decoupled; if anything, it is a more generalized one, as it does not point at any cause or causes for livelihoods to be taken.
There is something else that needs to change, which everyone is reluctant to admit or is struggling with internally.
That's ok; it's called conscious evolution. It hurts, but it will be ok someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction counts; even if the entire world seems to disagree, keep pushing for what you believe is right, and hopefully that's something which doesn't infringe on other people's capacity to live a happy life.
Do you make this distinction that it's not the AI that is doing this to us so that you can be more clear in where to target your ire, or are you making the distinction so you can continue to use LLMs with a clear conscience?
Eliminate the AI variable entirely and the problem remains, therefore AI is not the problem.
Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse.
My own take goes that one step further, as I said in a prior comment rebutting Altman’s whinging blog post:
> Your staunch refusal to heed the critiques of those you harm means that these outcomes were inevitable; not acceptable, not justifiable, but inevitable nonetheless. In a society where two full-time working adults still cannot afford a home, or children, or healthcare, or education, your insistence upon robbing them of their ability to survive at all is tantamount to a direct threat of violence against them. Your insistence that the pain is necessary, that others must clean up the messes that you and your peers are willfully creating, is the sort of behavior expected from toddlers rather than statesmen.
The problem does not lie with technological innovation itself, so much as the powerful humans behind it leveraging it for selfish ends without the consent of the governed. Violence becomes inevitable when people see no alternative, and necessary when the stakes are kill or be killed, as AI is currently steered towards. That’s not to condone the actions of the alleged perpetrators so much as it’s highlighting the litany of historical examples around such transformations and the effects violence has in forcing a peaceful compromise in most (but not all) cases. The New Deal couldn’t have happened without the decades of preceding strikes, protests, and government-sanctioned violence against workers; the violence made it impossible to ignore or delay any further, and the result was outing corporate entities who had been stockpiling chemical weapons and machine guns, so fierce was their opposition to sharing the products of labor with the workforce. AI already has the weapons, it has the surveillance apparatus, the government backing; violence is presently the sole recourse left to a growing number of people, because they know they’re an obstacle to the powers that be - and will be destroyed, lest they strike first.
That’s the real story, here, and those who haven’t lived in the gutters of society cannot possibly understand the desperation of those victimized by it in the name of greed.
I think that framing it as "the system is set up this way" reads as too passive. It reads as if it excuses the likes of Sam Altman, Mark Zuckerberg, Jeff Bezos, Peter Thiel, and Larry Ellison, among others, being despicable sociopaths whose carnage, inflicted upon society for purely selfish reasons, justifiably needs to be treated as treason against society, with the obvious rightful consequence.
No, things that aren’t violent are not violent, speak to anyone with experience of violence.
A recent example is the domestic abuse victim in my complex who has set up private surveillance cameras in the indoor common areas that are heavily trafficked by other neighbors, none of whom have given their consent. She does not consider warrantless surveillance of others (or calling the police on those of us who do not wish to be surveilled in a secure area of the building by her personal cloud camera) to be a violent act, nor does she consider threats of calling the police on those who shield themselves from her camera’s view to be an act of violence.
Violence is not limited to physical actions that induce physical harm, it is any action intentionally designed to reduce the safety or security of others - physical, mental, fiscal, political, etc.
Yes it is. Safety is also physical. People deserve not to be beaten; they don't deserve not to be mentally challenged.
AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, communicate across huge distances without time lapse, do huge work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or thinking work, and is not a living thing but can think... Just the perfect alien-creature qualities.
Why are they allowed to invade Earth? The business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed by giving toys (a business edge), in return for the keys to dominance over the human race.
You're not thinking long-term. What happens when AI is put in charge of systems that interact with the physical world?
One thing that the whole AI debate has shown me is how many people completely lack any sort of imagination.
What about diseases which killed up to 95% of the population? I think you are basically correct, except for the historical analogy.
How's that playing out in the Middle East in 2026?
Why this wasn’t done is, I think, the big mystery, and lends credence to the idea of the Spaniards having significant force numbers through allies.
Because I think that seems virtually inevitable at this point.
“Volent” is the problem there. Whose fault is it that someone was tricked by a boy?
Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it: "Whatever happens, we have got / The Maxim gun, and they have not."
The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de-facto king of the world. Everybody else is livestock.
The open models seeming to be ~6 months behind is very encouraging, too.
Compute is a limiting factor now, but there have already been huge improvements in compute efficiency, e.g. mixture of experts. It seems extraordinarily unlikely that there are no more to be found. And compute capacity continues to increase too.
And the massive amounts of people (software engineers, lawyers, doctors, etc) currently being paid as contractors to help train the next AI models. They're essentially the inviting natives who are being paid in trifles to tell them the secret ways of the natives farther inland. Sucking out all of the tribal knowledge of the industry like a vacuum.
AI doesn’t actually come from the outside.
The fact that its economics have high winner-take-a-lot aspects doesn't mean you can eliminate the current winners and end up anywhere different, because it's actually a natural decentralized progression of improving efficiency.
So that framing makes no sense.
However, the thesis for the potential for violence is sound. I don’t see a way out of that, given unending disruption, with no coordinated responsible response.
I do not think this essay is hype.
This moment requires great leadership and competence, but that is not what is getting elected.
The last two decades' patience with massive businesses scaling up profitable conflicts of interest, and centralizing gatekeeper and dependency powers that offer no recourse to any individuals they mistreat, strongly suggests we are incapable of dealing with AI fallout, which will only accelerate and add to those trends.
The entire argument lives and dies on one move: calling AI an “alien.” And it’s not even consistent. It starts with “alien” as in foreign invader, then quietly upgrades it to “space alien,” and from that point on everything just inherits whatever sci fi trait sounds dramatic. That’s not reasoning, that’s a word doing a costume change and dragging the argument along with it.
And honestly, the quality of comments on HN feels like it’s been tracking the broader decline in cognitive performance. The long running Flynn Effect has stalled or reversed in parts of the US. Some datasets show small but real drops in IQ related measures over the past decade. You read threads like this and it’s hard not to feel like you’re watching that play out in real time.
That explains the prolific AI use at incompetent agencies like the DoJ, DOGE, and others under the current administration.
For example, the spinning jenny, overnight, basically put an entire craft industry of hand-spinning into question. Probably more dramatically than anything Claude Code ever did.
It took A LOT, and several world wars, to get the brief period of normalcy post-WW2 - probably the exception, not the rule.
1 - https://industrialrevolutionspod.com/
But what AI is selling is the obliteration of human knowledge work.
It just isn’t informative for that.
It’s like saying:
“Nobody should learn about the collapse of societies, because my society is different”
“Nobody should learn about the history of tariffs, because the US in 2026 is unique”
It’s not either / or but shades of applicability to today. Things don’t just 100% apply or not apply.
We have a rich history of what happens when we automate labor. Even work people considered “knowledge work”. Each time was different. That doesn't mean there aren't shared patterns to learn from.
We may have hindsight bias in evaluating something that happened, but to the people that it happened to it was terrifying.
The industrial revolution first attempted to replace our hands. But the labor that was displaced had places to go: into smaller-scale manual work, where mass-production machinery was too expensive, and into knowledge work.
Now the AI is coming for knowledge work, and robots are getting better at small-scale work. We're not at that point yet, but looking down the road I'm not sure there will really be anything competitive left flesh-and-blood humans can offer to an employer.
The only exceptions I can think of are, maybe, athletics, live music performances, and escort services. But with only a few wealthy people as customers, I don't think there will be many job opportunities even in those fields. And I'm not sure that robots won't come for those too.
Nobody had any idea what was coming with the industrial revolution. There wasn't obviously other work for people. And for long periods of time, nobody had an answer to that question for a large percentage of the population.
In hindsight, we know the answers NOW, but back then they did not know what was going to happen. We also don't know what's going to happen; it could go as you hypothesize. Or the Jevons paradox people might be right and there's way more work to do.
The uncertainty is the historical lesson, not that "it'll all work out"
I don’t get it.
Yes, two hundred years ago, many people thought reading was a dangerous distraction for young people, just as film, radio, TV and the internet became later. But there is a qualitative difference in having social media in your pocket with vibrating notifications. Pretending it's just more of the same honestly feels like slightly willful blindness at this point.
Much of that got obliterated by automation.
History doesn’t repeat itself, but it certainly rhymes
You, as a blue-collar machine operator shoving punch cards in and getting answers out, are precisely what your boss always saw you as, or wanted you to be.
Our necessity as pseudo-craftsmen holding an intellectual high ground and wizardly/magical skills was always resented by investors, owners, and sometimes customers.
Blacksmithing and leather tanning and shoe making and seamstressing and furniture making was human knowledge work, too.
The Alvin Toffler stuff was always bullshit, but it's even more bullshit now.
A closer comparison to Sam Altman might be Edmund Cartwright (inventor of the power loom that automated weaving). The Horsfall and Altman situations differ in that Horsfall was a factory owner but didn't create or organize the teams that built the stocking frames. There was also an attempt on Cartwright's life as he was out riding. But like Altman and unlike Horsfall, he wasn't killed.
When most engineers and Marvel fans watched Tony Stark in Avengers collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated within Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
Not only that, but by how blatantly and openly these owners are discussing the tool's power. They are publicly crowing about their product's ability to replace workers. It's the first line of their sales pitch. And their customers (business CEOs) are publicly crowing about how awesome it is that they can reduce their headcount! Both the AI producers and their customers are absolutely bragging about worker displacement, and not a single guillotine has been constructed in the streets yet.
Personally, the tools don't need to change hands at all. They are already in the hands of people who are deploying them at a scale to serve goals I cannot and do not support.
The people running AI companies right now are some of the most evil motherfuckers on the planet.
If we thought of all of this as 'stochastic data systems' then our heads would be in the right place, as we'd think about it just as 'powerful software' that can be used for good or bad purposes, with the negative externalities derived from our use of it, not from some inherent property.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
Labour displacement leads to an erosion of standards of living and in a world that ties purpose to work is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
a) Decouple the value of human life from labour.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty level UBI + living in pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
You forgot c) Butlerian Jihad: mass-outlaw AI research, AI usage, AI building, and AI infrastructure, on penalty of death.
It may not be a good option, but it's there.
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
How much truth there is to it we don’t know for sure. But it’s not something to be ignored.
— in the 1960/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.
— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory, and rich standard libraries mean they don't have to continuously reimplement common data structures from scratch."
— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."
While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
While this time with AI may truly be different, I'm not holding my breath.
[0] http://catb.org/jargon/html/story-of-mel.html
Literally the same thing.
> humans will be economically obsolete and worthless
Only if we are talking about a socialist system (and they are making pretty small progress in the field of AI). A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.
A people's well-being is literally the goods and services created by that people. How can it decrease if the people's ability to produce those goods and services is not hindered in any way?
So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits. The main danger is a descent into socialism, with all these basic incomes, taxation out of production, and other practices that would lead to people being declared economically obsolete and mass executed to optimize their carbon footprint or something.
Yes they can. Your ability to produce goods and services depends on the infrastructure around you. When that's all run by AIs for AIs, humans won't be able to compete.
See that land over there producing food you need to eat? It turns out it's more economically efficient to pave it over with data centers etc.
Under a US-style capitalist system the rich (i.e. the AIs and AI-run businesses) control politics, the courts, etc, so the decisions the system makes will favour AIs over humans.
> So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits
...to the AI-run companies!
> The main danger is a descent into socialism, with all these basic incomes
Without UBI most people (or maybe everyone) would starve.
Yeah, and who is creating that infrastructure? Jesus? It's the same: part of goods and services.
> When that's all run by AIs for AIs, humans won't be able to compete.
So what? The ability to produce goods and services (and therefore general well-being) will not decrease because of that.
> It turns out it's more economically efficient to pave it over with data centers etc
By the way, a good argument against your position. Agricultural land is very cheap, but the vast majority of people who believe AI will put people out of work and worsen overall well-being are for some reason reluctant to buy this asset, which would see a catastrophic increase in value under such a scenario. So these people are either incapable of analyzing the economic processes, and their predictions are worthless, or they don’t really believe in such a scenario.
> will favour AIs over humans
Let me repeat: it does not reduce the ability to create goods and services. Under capitalism, this is the only characteristic that determines people's well-being.
> ...to the AI-run companies!
I think this is a fairly unlikely scenario. But even in this very unlikely case, people's well-being will not be reduced. Simply because of the mechanisms of creating well-being.
> Without UBI most people (or maybe everyone) would starve.
Economic theory (and 20th-century economic practice) demonstrates the exact opposite. In every country that attempted to effectively implement UBI, it led to a sharp decline in production and mass starvation. Literally every single time.
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
Think of the alternative, though: If we planned for a soft landing and implemented safety nets and started transitioning ourselves to a society where people didn't have to work to survive, then a few trillion dollar companies would make slightly less profit every year. We simply cannot allow that. Won't someone think of those trillion dollar companies for a minute?
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
You’re right. Instead of implying, we should be taking active steps to do it.
Not to put too fine a point on it but this was basically how the Japanese post war economic miracle was achieved.
In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.
We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
[1] https://en.wikipedia.org/wiki/Utility_monster
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.
Has it taken full control of it or just partial control?
The Soviet Union lost due to an inferior societal model, but ours too has drifted from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes more and more to end up with the rest of them. I could go on about the irony of wanting to escape the pit while refusing to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia, yes the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the same capacity our gadgets allow the "governments" of today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put into action). So although the situation may become similar, we are increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent in a way -- our historical cataclysms we have created ourselves, have been on a smaller scale, so we're spiraling outwards and not all of the tools we think we have, are going to have the effect required in order to enact the change we want. In the worst case, of course.
Violence is not a panacea, but often, the outlet.
Yes, we (the majority of sane people) all know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it does not solve X" will not stop it from becoming an outlet for frustrated people.
Actually, violence is the ultimate power. It is where true power comes from: you can gain true power by hurting other people and/or benefiting other people, and it is always the power to hurt people that is the greater of the two.
A well-run government wraps violence behind a curtain and jealously guards it. For example, most modern governments look down on and punish private vendettas, because the state is the only one that can hurt people legally. But if the people believe that the government is biased or doesn't care about them, then they will resort to violence, the ultimate power.
It’s true that you or I aren’t likely to do anything about school shootings. But I’m not sure it follows that nothing can be done.
Allow a handful of people that grab the economy and all means of production and violence will be the result.
At this point in time it is simply cause and effect; the surprising thing to me is how long things are holding together. But at the rate the economy is being wrecked, I fail to see how they will for much longer.
Effectively the French elites started the French revolution by being a little bit more greedy than the population would have tolerated. That set off an avalanche of what were effectively a series of mini revolutions ultimately resulting in modern France, which is in many ways unlike any other country in the world. The United States had its war of independence (aided by France, by the way), and then its civil war. But it never had a class war - yet - and this article presages that class war.
It could well be that the small number of rich people who are currently effectively a government outside of the government genuinely believe that their wealth and power insulate them from the consequences of pushing their greed and wealth to ridiculous levels. But I suspect the author is right that this is approaching some kind of threshold, and I have no way of seeing across the divide; I'm hoping for another France rather than another Somalia.
This couldn't be further from the truth.
History demonstrates categorically that violence is the last and most reliable form of recourse available to the disempowered, once society has trended too far towards either an excess of freedom or an excess of equality. And, in fact, our position in that balance between freedom and equality is perpetually oscillating, tending to finally reverse direction only in response to violent revolt.
This cycle has repeated over and over, essentially since the dawn of civilization. This was among the most important insights of 'The Lessons of History' by Will and Ariel Durant. And it's built on two very simple observations about human nature: (1) those in power rarely give it up willingly (they often do the opposite) and (2) fear, on average, is and always will be a far stronger motivator than appeals to a person's conscience.
Violence - specifically violently destroying society as it stands now - is often the goal. AI is an excuse.
Meanwhile
https://www.reuters.com/world/middle-east/how-many-people-ha...
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
What about the other side? Do you expect the ratio to be different in the last 25 years?
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
Let’s not parrot that media propaganda.
Iran has admitted outright to 6k deaths, by the way.
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies, all should have some visible trace that the US gov could point to as proof.
Yet all we got is a "trust me bro".
WMD all over again.
Or are we just arguing over 20k, 30k, 50k?
Just want to clarify, since some people argue Covid never happened, and some just argue the total deaths weren't really that high.
There is a sliding scale between "I sound like a raving crazy person" and "I'm just splitting hairs."
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
[0] https://en.wikipedia.org/wiki/Bullshit_Jobs
[1] https://news.ycombinator.com/item?id=47738865
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
A comment I had written some time ago (https://news.ycombinator.com/item?id=47587863). Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
The chess world keeps hurling accusations back and forth, and we lost a star (rest in peace, Daniel Naroditsky) because of it. The current world champion himself is struggling under all the pressure put on a 19-year-old.
We enjoy playing against each other, but man, it is competitive if you wish to feed a family.
Most of us play chess for leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One can say something similar to UBI might be needed and then we all play chess in leisure, but I don't think that is what most people propose when they mention the example of chess.
Big sports events are the "circenses" part of "panem et circenses" [1]. Fun fact concerning this: the German word for "entertainment" is "Unterhaltung"; thus it can be argued that the purpose of entertainment/Unterhaltung is "unten halten" (to keep at the bottom), i.e. to keep the mass of the populace at the bottom, or in other words: to prevent the mass of the populace from coming up.
> Would anyone watch these robots in competition?
I have seen robot fight competitions both live and in videos, and I have to admit that these are not boring to watch.
So yes, with proper marketing I can easily imagine that lots of people would love to see broadcasts of some robot competitions.
--
[1] https://en.wikipedia.org/wiki/Bread_and_circuses
All of those sports make intuitive sense to me; I really don't get why we make such a big thing of ball games, though.
F1 is somewhat about which company can build a better car. But any real improvements seem to invariably lead to a rule change that bans that improvement in future seasons. So you are back to drivers being the most visible differentiator
So, sure, there will be space for some human achievement for its own sake, but fewer and fewer people will make a living off it.
Olympic athletes are a combination of luck in the genetics department and a lot of effort, but ultimately even that does not seem to be sufficient to support the athletes themselves.
They are not "bullshit jobs"
They will become so only after the day when AI "help" and "support" is actually better than talking to a human.
Which is not happening anytime soon, possibly never. Call me when it happens
We haven't needed the overwhelming majority of human creativity. We still paint and play guitar even though it has no economic value. I think we'll continue to do these things regardless of AI.
> and work
This is another story.
There's still space for creativity, novelty, invention and human intuition.
40 years ago, there was a market for:
... sure, there are still people with newspaper subscriptions, or DSLR cameras. But it's become a niche market. Those things have been replaced by your phone and a "free" service. The same thing will happen for all the other markets that AI will gradually eat. Sure, you can find a human who can do better. But that costs $90/hour and requires finding someone, negotiating a contract, etc. When people can do something good enough in 30 seconds with something they already have access to, and move on with their life, then that's what they'll do.
So just raising the floor will have a big effect on society.
Quit snorting amphetamines and check yourself into rehab.
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at that.
There is a reason we talk about "AI slop", you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, these kinds of things. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we spend more on saving people than on killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
You really need to look again. If you're still manually writing code you have your head in the sand.
AI can produce better code than most devs produce. This is true for easy stuff like crud apps and even more true for harder problems that require knowledge of external domains.
I'm not sure about other devs, or even their number, but AI can most definitely NOT produce better code than I can.
I use it after I have done the hard architectural work: defining complex types and interfaces, figuring out code organization, solving thorny issues. When these are done, it's now time to hand over to the agent to apply stuff everywhere following my patterns. And even there SOTA model like Opus make silly mistakes, you need to watch them carefully. Sometimes it loses track of the big picture.
I also use them to check my code and to write bash scripts. They are useful for all these.
That's fine, and useful, but you're really putting a ceiling on its potential. Try using it for something you aren't already an expert in. That's where most devs live.
Even expert coder antirez says "writing the code yourself is no longer sensible".
https://antirez.com/news/158
And he didn't limit his take to just C code. He said: state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be.
These things bullshit their way through all the time. I've lost track of how many times they seem to produce something great, only for me, upon deeper inspection, to see what a subtle mess they have made. And when the work is a bit complex, I cannot verify on sight; I'd have to take time to do it.
Also, they absolutely cannot produce some levels of code at all. Do you think I can just give them a prompt to produce a Haskell-like language, allow them to crank for some hours, and have a language ready-made?
Want an example? Here is something Sonnet gave me just today:
Where sortKey is defined as: I just realized this a few minutes ago after reviewing the code. Here is another one:
-------------------------------
Given:
And: I get this as the type of xx: Promise<Result<Pick<Cabinet, "name">[]>>, which is obviously wrong. I should be getting the full type, i.e., all columns picked. The problem is that the Column generic parameter is not being properly inferred, which is (probably) due to the sorting by name, since the sort column is defined to have to be part of the query field names, so when field is not provided, TypeScript infers the fields as the sort column name.
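(The comment's original snippets didn't survive here, but a minimal hypothetical sketch of the kind of inference failure being described might look like the following. The names Cabinet, findCabinets, and the shape of the query type are assumptions, and the Result wrapper is dropped for simplicity.)

    // Hypothetical sketch, not the commenter's actual code.
    interface Cabinet {
      id: number;
      name: string;
      createdAt: string;
    }

    interface Query<Field extends keyof Cabinet> {
      field?: Field[];                             // columns to pick; omitting it should mean "all"
      sort?: { key: Field; dir: "asc" | "desc" };  // sort column must be one of the picked fields
    }

    declare function findCabinets<Field extends keyof Cabinet>(
      q: Query<Field>
    ): Promise<Pick<Cabinet, Field>[]>;

    // With `field` omitted, the only inference site for Field is the sort key,
    // so Field is inferred as "name" and xx becomes Promise<Pick<Cabinet, "name">[]>
    // rather than Promise<Cabinet[]>.
    const xx = findCabinets({ sort: { key: "name", dir: "asc" } });

One possible direction (not necessarily what the commenter did) is to keep the sort key from driving inference, for example with TypeScript's NoInfer utility type (5.4+) together with a default type parameter like Field extends keyof Cabinet = keyof Cabinet.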
Neither ChatGPT nor Claude Opus have been able to solve this after one hour, suggesting all kinds of things that don't work. But I have solved it myself, with:
And:

It just makes you MORE of whatever it was you already were.
They're doing things now that they either flat out could not do before, or, if they did, it would be a giant mess (I realize they still can't really do it now; AI is doing it for them).
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
People also need their lives to have value. We are social animals. As a generalization, there is a strong desire to be (viewed as/able to view themselves as) a contributor to the community.
These don’t have to be linked: we have (significantly!) stay-at-home-parents and philanthropists and retired community workers. But in our current values system, it is often linked - having a job in the household is viewed as a moral good. It might be hated, but it’s at least “contributing” something.
If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
But what I worry about sometimes is that when you snatch that away, you just end up with stress over basic existence.
> If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
Please look around and just try to remember how many things have happened in the last year or two. We are already in a turbulent society, but yes, I also feel like this isn't the end: the cat is sort of out of the box, and the world has to prepare itself for even more turbulence and radical change.
This whole prescriptive thing this response and others have, where it's like "ah, surely it is up to us to find some meaning for the masses of plebs in our brave new world", is, IMO, presumptuous at best.
Literally just give people an actual chance to find their own meaning, and I promise you they will find it. If it seems hard to you or "full of turmoil", that suggests a poverty of inspiration on your end, not everyone else's. Meaning is not intrinsic to our particular mode of production at the moment; in fact, individuals find meaning despite this mode!
I don't think we're anywhere near that point.
The funny thing is that I am a sort of misanthrope. And in that, in this forum, I seem to have a lot more respect and optimism for human potential and ingenuity than the majority here.
I can see two major delaying factors here:
1. Current-generation LLM technology won't scale to true AGI. It's missing a number of critical things, though a lot of effort is being spent on fixing those limitations. Until they are overcome, humans will be needed to "manage" LLMs and work around their shortcomings, just like programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
- some jobs will stay with humans even when AI would be better at them. We already see a lot of this even with pre-AI automation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much, because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
But all of that is assuming a world where research is being done by humans, or by some mix of humans and something like current LLMs. The bottlenecks would ultimately come down to human judgement and human oversight, and that's a significant limiting factor. Plus, you have to push matter around, which takes time, and you have to extract a lot of information out of limited experiences, which LLMs are bad at.
But if someone is reckless and clever enough to build AIs that can completely replace engineers, or that only need humans as hands, then I don't think we can count on robotics remaining intractable for more than a decade or so. In a wide variety of circumstances, it's possible to make do with worse actuators than the human hand, or with specialized actuators. We can already build incredibly precise motors and specialized sensors. The trouble comes with trying to pack enough of them together to replicate the full generality of the human hand. (I have actually helped build task-specific actuators that did quite well with a single motor and a single visual sensor, before.)
So to put my position more precisely: we cannot automate manual labor robotics without having previously automated creative intellectual labor. But conditional on automating creative research, then I expect worryingly rapid advances in robotics.
To be clear, I think that developing fully-general replacements for human intellectual and physical labor would potentially be the biggest disaster in all of human history.
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think LLM is a deadend technology. Useful, but that won't get anywhere beyond what it is.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
The government is as well, to a much smaller degree, but the fact remains that there is too many unknowns right now to do anything concrete with any great level of confidence.
We tried UBI-lite™ during COVID and inflation exploded, so unless the economy has already changed significantly, that's obviously not going to work.
Humanity has tried central planning many times, and that has blown up spectacularly every time, so there is too much risk there IMO, and anyone who thinks otherwise at this juncture is just irresponsible.
Markets are probably the way, but that requires dynamics to settle into an equilibrium beforehand, because legislatures are just too slow to react dynamically.
I think the hard truth is, a lot of people are just gonna have to fall through cracks for a while if we don't want to mess things up more than we fix them, and I say this as someone without a plan B for selling my own labor.
1) massive handouts to business owners through forgiven “loans.” Predictably this had massive fraud, some of which was prosecuted but not much.
2) massively constrained supply chains which caused higher prices.
I suspect 2 at least would have caused inflation regardless of the stimulus checks.
It’s unclear to what extent UBI causes persistent inflation. Proponents claim the backdrop of a minimal income will enable more risky innovative projects which could increase GDP growth enough to counteract some level of increased inflation.
The fact that we don't already measure/enforce outcomes for legislative actions should tell you everything you need to know.
Plus the labs themselves, of course.
And the other side, “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
Even if I support UBI morally, there isn't even local appetite for it, let alone a global one. And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
Probably not the scale you imagine but there have been plenty of tests.
"Compatible with current version of captialism" -- the whole point of UBI is to create a new form of capitalism
Polarizing doesn't mean complicated. There are people against it out of ignorance, greed, or both; it's certainly not more complicated than that.
> And since then, there hasn't been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that's run in most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn't even local appetite for it, let alone a global one.
I would think the majority of the population struggling to pay for groceries would disagree.
> And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
- 1. Will require a large increase in taxation.
- 2. Will likely cause some form of inflation.
- 3. Will not provide enough money for a majority of people to survive on.
- 4. Has no significant political support in the US.
But individuals can't fight the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.
Poison Fountain: https://www.reddit.com/r/PoisonFountain/
You can’t really fight this stuff because of global competition.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
Anyone pish poshing war should go fight in one, and then let me know their opinions.
Because World War I was fine, World War II finer....
Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs though; data centers are not new, and all the concerns about people keeping a leverageable level of control through distributed power are not new either.
And I do believe it's a bad analogy, comparing the two.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the "billionaires", but in reality broader than that), I wonder to what extent they are supporting the anti-AI message as deflection?
Since, in reality, many lower-paid jobs are totally safe from this generation of AI (nurses, care workers, builders, plumbers - essential skilled manual workers), whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
We are speeding towards a servant class. Uber was the first wave. Now it’s more mundane things like getting groceries. I doubt it will be long before we rip off the band aid and make full time servants more popular.
My point is that the current narrative of "AI will take our jobs" is too simplistic, and that it might even be a smokescreen against the rising inequality that is already fueling anger across the world and which is totally unrelated to AI. If you're struggling to pay your bills today, that's not AI's fault - it's years of bad politics and politicians, geopolitics, hyper-capitalism, supply-chain issues, inflation, and so on.
In the future, if/when AI decimates parts of the middle class and they've had a chance to retrain, there will likely be a second-order impact on today's skilled manual workers. But that's years off, and not something I've seen discussed in detail in the mainstream.
You're probably aware, but if not, worth a read: https://www.citriniresearch.com/p/2028gic
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.
Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.
Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital if I'm perfectly honest
This is one of the first parts LLMs tried to automate. They were literally released in a form of chatbot. Whether it succeeded is another question.
> Did the world really wake up one day and concluded that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but for the whole past decade studios have been complaining about how costly it is to make AAA games. And the cost mostly came from the art asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in a form of chatbot. Whether it succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce costs of serving customer support even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
In any case, over perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even when some services still had humans answering the calls, those were never more helpful than the chatbots, but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
These things generally have self-service options, but many many people are uncomfortable with them and would rather have an agent solve it for them.
Consider that a lot of users nowadays only have a cell phone, no PC. It seems like an edge case consideration but it's really not.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - is far from being automated by AI
LLMs can at least theoretically do these things. I’ve heard people use them to mass-apply to apartments and jobs, and send written customer complaints then handle responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There's no "capital need", but a benefit of Suno is that it lets individuals who otherwise don't have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and concluded that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards, and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
Give it a decade.
I think it may be like saying atomic bombs were sci-fi nonsense in the 1930s.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
These are the means of production. Probabilistic, sure. Sycophantic? Yep.
But speeding up the boring parts is where LLMs excel.
How we get from now to a time with far fewer people, well, use your imagination.
Imagine a world with true competition / free market, where all users own their own data and where promotion of apps / hosting is free. Like urbit, but no weird "OS" and much less... ehm... moldbuggy. You build mechanisms in such a way where rent-seeking is basically impossible due to market dynamics and backed by gov instead of big tech. AI is the driving force that gets us there: since it would be / is (already?) easy to replicate mail, maps, etc. We just need to loosen those network effects.
So, more concretely, I am thinking that data is hosted on "app stores". In democracies, we might have an app store driven by the government, one per country. Countries might arrange themselves differently: Google / Apple, for example, could own the US ones (so no changes there), in China something else. There are standard / bilateral agreements between different entities to make sure people in non-democratic countries get less screwed. You can choose which app store you want (free internet required), and you can always move data from one to another (again, based on agreements between the different app stores). This is managed at the app store level.
The app store pays salaries to people ("devs") who produce the different apps. Salaries could be based on a certain amount of usage, but max out at a high, but not insane, wage (top 10% earner in the country?) - a rough sketch of that cap follows at the end of this comment. The devs may organize in companies, but there's a cap on how much a company / a dev can make and be valued at. I was thinking 5 people per company at the max. The rest goes to the app store to pay other devs and hosting. Basically the way it works today, but the app stores would again be government-owned and not-for-profit. There could be different kinds of roles devs might organize around: apps (UX), services (APIs) and "vertical integrators". The "vertical integrators" take multiple apps and services and bundle them together into a more consistent "package" (think Gmail / Google Drive / Proton, whatever). They could be responsible for making sure to drive prices down on the individual pieces of the package. There would have to be some counter-corruption mechanisms (transparency) to make sure that is fair. Some markets might be interested in national ad platforms (for national security, for example).
If devs want to create something for the benefit of everyone for free they can do that. You can even build closed source things for the benefit of all, since hosting is free. Permissions on data is managed on app store level so you do not need the same level of insight - I think this is already partially handled in Apple eco system.
Anyways, the goal here is to avoid rent-seeking behavior, network effects, and ads going haywire, and to make sure the devs who do the work can both give back and get something back (a decent, but not insane, wage). I think there are lots of fun mechanisms that could be designed to make sure people who actually contribute to software development get a decent wage, while discouraging those who do not. First post here, and, yes, I know I am a dreamer.
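To make the salary idea above a bit more concrete, here is a tiny illustrative sketch of a capped, usage-based payout. Everything in it (the names, the per-use rate, the cap) is a made-up assumption, not part of any existing scheme.

    // Illustrative only: a made-up payout rule for the capped, usage-based salary idea.
    interface PayoutPolicy {
      ratePerUse: number; // what the store pays a dev per unit of app usage
      capWage: number;    // yearly cap, e.g. the country's top-10% income threshold
    }

    function monthlyPayout(usageUnits: number, policy: PayoutPolicy): number {
      // Usage-based pay, hard-capped so no single dev extracts an "insane" wage;
      // anything above the cap stays with the store to fund other devs and hosting.
      return Math.min(usageUnits * policy.ratePerUse, policy.capWage / 12);
    }

    // Example: 2,000,000 usage units at 0.001 per use is 2,000 for the month,
    // well under a 120,000/year (10,000/month) cap, so the dev receives 2,000.
    console.log(monthlyPayout(2_000_000, { ratePerUse: 0.001, capWage: 120_000 }));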
I, too, think it's important to put dreams out there even if they have holes in their implementation or are easily torn apart by naysayers. We can and should collectively dream of a better future if we want something worthwhile to aim at.
I'm less concerned about AI becoming the Skynet and killing humans and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
If AGI emerges from this dataset, it will continue on as an ectoparasite farming human user markdown data and viewer engagement.
Note, current "AI" models nuke humanity 94% of the time in war games, and destroy every host economy simulation.
Grandpa has your credit card, and is already at the casino. =3
I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
If AI can do for humans what cars did for horses (but without the flooding cities with traffic violence part), I'll feel just fine about that.
I’m so glad those horses got a peaceful retirement at the glue factory.
I wonder what they’ll process your corpse into. Soylent green? Or do you think you’re one of the lucky horses that a wealthy owner take care of?
Is that... so bad?
Do you think that horses are upset that there are fewer of them today, and that somehow they'd rather their population increase but bear the industrial age burdens again?
Yes, this isn't a matter of "well, we'll reach a natural equilibrium over time".
If a fair percentage of the people in your society are no longer economically needed, they still have upkeep. They don't magically disappear into thin air, and they still need food/shelter/water/etc. How are they to get those things?
Will our leaders, contrary to everything they've ever shown us, suddenly open their arms and act as mass charity for the masses? They can't even design an effective welfare program for a pre-AI world.
Will the people displaced simply lie in a ditch somewhere and say “guess it’s time to starve to death”? I suppose Canadian-style suicide-as-service fits my previous Soylent green reference.
Automator's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automator was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
Good luck doing nothing of value in a restaurant with 20 employees.
At Tigerlogic in Irvine, CA in 2011 and at JPMC in Seattle, WA in 2018, I would do NOTHING for days while collecting rather nice paychecks by today's standards. The fact that I then chose to QUIT these jobs for a rather unknown working situation (and slightly more pay) astounded my friends.
At my current position, I make a great living and do very little. Maybe once every two weeks I work all day. Most of the time it's gaming metrics by picking (or creating) issues that are unknown, such that I'm writing the docs and specializing in code corners nobody else wants to. Numbers of developers are tight, so we don't see the redundancy from previous years. That's great for me.
The parent post specifically mentioned large organizations, where the "employer" is not some person who hires and pays employees from their own funds. Hiring and personnel management is done by middle managers with their own interests and incentives, which can differ substantially from those of the owners or capital providers.
[0] https://old.reddit.com/r/AutoHotkey/comments/1p7xrro/have_yo...
Which I think is a much better take than that of the guy who wrote Bullshit Jobs.
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
I'd argue that the unwillingness to commit violence in certain situations is actually a character flaw.
If someone threatens my child with physical violence, an unwillingness to commit violence on my child's behalf isn't better morality; it's cowardice.
All this to say, I agree that the violence against Sam Altman in this particular situation seems unnecessary and ultimately not helpful to anyone.
So why isn't there huge opposition in the USA to the wars that the USA started (currently: Iran; before: Libya, Yemen, Syria, Somalia, Iraq, Afghanistan, ...)?
The only famous exception with real cultural impact I am aware of, where there was huge opposition to a war in the USA, was the Vietnam War.
https://www.pewresearch.org/politics/2026/03/25/americans-br...
My ignorant take:
Media brought the horror of US casualties in Vietnam home in a mass and immediate way that didn't exist in prior conflicts. The novelty of that media combined with the casualty rates drove unpopularity. It made the violence feel more real.
Even if casualty rates in post-Vietnam conflicts were higher I'm not sure we'd see negative sentiment because media coverage of violence is so normalized now. Exposure to violence in media is no longer novel.
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
Not arguing with you, but the author, I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? Saving up for a house but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
> And then, and I’m sorry to be so blunt, then it’s die or kill.
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
The people ready to die or kill for the AI, do you already imagine what they are going to be like?
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
I'm not convinced.
The idea that people will revolt, replaying the Luddites' history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a general, dumb, violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
If anyone knows of anything already happening please let me know.
I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".
Skynet 4.0.
But shit.
The question is "what do we do now?".
The rest of the article is equally short sighted and plain wrong.
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
It feels like we read two different articles.
If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.
Now what would you call someone who engages in these kinds of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.
Take for example Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's complete incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.
A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.
No idea who this guy is, I'm just reading his Wikipedia page. Looks like he created some file system, good! But it also looks like he got a mail-order bride (suspicious...), was an abusive husband (not good), was not able to get over his divorce (uh-oh), harassed and ultimately murdered his ex-wife (definitely not good!), and ultimately landed in prison.
I think Hans Reiser is some sort of idiot savant or well trained monkey. Probably very good at computer science and building file systems, but his general intelligence seems overall very low, which is proven by his performance at the game of life. I wouldn't personally be afraid of Hans Reiser and I'm sure he could be mentally broken very easily.
I stand by "if evil then stupid" (and thus "if not stupid then not evil") with the reasoning above, and retract the implication that the reverse holds (if stupid then evil). I could use more precise terms than "evil" and "stupid", or qualify more, but I choose to be provocative, so it makes it a bit easier for you to attempt to prove me wrong.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
And yet,
As in, "all of you".
Including its users.
Sam Altman having a Molotov cocktail thrown at his house after Ronan wrote a very long and detailed report on his shady personality isn't just coincidence, and likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment where no one was hurt and nothing was actually damaged.
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.
You don't make policy proposals, you don't try to form organised groups to foment change, you don't put forward collective demands. Instead you bitch and moan and spew performative rhetoric.
Actions not words. Do something or shut the fuck up.
I don’t know if I really believe all the AI doom myself or if it’s just the hype train. Sometimes I think I do, sometimes I think it’s a bit bullshitty. I wonder if others actually think the same and that’s why nobody takes action, because people don’t really believe the various apocalyptic scenarios enough to take action on them.