Don't underestimate how anti-AI the tabletop community is. This could have been titled: "Games Workshop elects not to experience multi-year headache. Will use AI when profitable."
I don't do much with crypto/NFTs/AI, because I don't find any of it useful yet. But I get so much "with us or against us" heat for not being zealously against the idea of them. It was NFTs, NFTs, NFTs at the table for months until it became AI, AI, AI. My preference is to talk about something else while playing board games.
One thing I've found when talking to non-technical board gamers about AI is that while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
A minority are conflicted about this position.
When I talk to technical people at game nights we almost never talk about tech. The one time our programmers all played RoboRally the night kind of died because it felt too close to work for a Saturday night.
If GW were going to use AI, they would probably start with sprue layouts. Maybe the AI could number the bits in a sane way? I would be for that.
> while they're 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
Three things:
1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
2. Somewhat informed people might know that for coding tasks, LLMs are broadly trained on code that was publicly shared to help people out on Reddit or SO or as part of open-source projects (the nuance of, say, the GPL will be lost). Whereas the art they were trained on is, broadly speaking, copyrighted.
3. And, related to two: people feel great sympathy for artists, since artists generally struggle quite a bit to make a living. Whereas engineers have solid, high paying white collar jobs; thus, they're not considered entitled to any kind of sympathy or support.
I've been a professional artist, designer, and developer. Mostly a developer, and working in academia throughout the late 2010s meant being privy to the development of neural networks into what they've become. When I pointed out the vulnerability of developers to this technology, the "well, maybe for some developers, but I'm special" stance was nearly ubiquitous.
When the tech world realized their neato new invention inadvertently dropped a giant portion of the world's working artists into the toilet, they smashed that flusher before they could even say "oops." Endless justification, people saying artists were untalented and greedy and deserved to be financially ruined, with a heaping helping of "well, just 'pivot'."
And I did-- into manufacturing because I didn't see much of a future for tech industry careers. I'm lucky-- I came from a working class background so getting into a trade wasn't a total culture and environment shock. I think what this technology is going to do to job markets is a tragedy, but after all the shit I took as a working artist during this transition, I'm going to have to say "well, just pivot!" Better get in shape and toughen up-- your years of office work mean absolutely nothing when you've got to physically do something for a living. Most of the soft, maladroit, arrogant tech workers get absolutely spanked in that environment.
> Whereas the art they were trained on is, broadly speaking, copyrighted
The overwhelmingly vast majority of the code you're talking about (basically, anything that doesn't explicitly disavow its copyright by being placed in the public domain, and there's some legal debate if that is even something that you can do proactively) is just as copyright protected as the art is.
Open Source does not mean copyright free.
"Free Software" certainly doesn't mean copyright free (the GPL only has any meaning at all because of copyright law).
> 1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
It is about scarcity: art is a passion; there is a perpetual oversupply of talented game designers, visual artists, sculptors, manga artists, music composers, guitarists, etc. You can hire one, and you can usually hire talent for cheap, because... there is a lot of talent.
Programmers are (or were?) expensive because, at least in recent times, talented ones are rare enough.
A good artist is just as expensive as a good programmer. Commissioning art is expensive. Outsourcing to third world countries is cheaper (just like programming!).
If you want a finished (nontrivial) program, chances are it's going to take at least an order of magnitude more than that.
> A good artist is just as expensive as a good programmer.
Let's look at industry: go look at what video game artists make compared to programmers with a similar amount of experience. Now, are you claiming that they just aren't very good artists, so they aren't paid well? Because I've seen their work, and it's not shabby at all.
Video game companies are a special case (even for programmers). They work people to the bone for lower pay because people are passionate about video games, but the common denominator there is gamers wanting to get into the industry—not being an artist or programmer.
People don't respect the salary premium software developers have received and expect relative to other creative, human endeavors.
You lay it out perfectly in your answer, and I'll add that the entire non-tech world generally feels that if tech jobs lose their shine due to AI, it's actually a welcome reversion to the mean. Software has likely depressed wage growth in many other jobs.
Most of the code that was publicly available to be trained on was written by people in their spare time who aren't directly making any money off of it, though. Personally, I think if you are fine with AI being used to generate code, you should also be fine with it being used to generate art. That doesn't mean that I think that big companies just scraping the entire internet and training on large amounts of portfolio pieces from ArtStation or people's open source projects is good either.
> Most of the code that was publicly available to be trained on was written by people in their spare time who aren't directly making any money off of it, though.
So what? The code is offered under specific licensing terms. Not adhering to those terms is just as wrong as training on a paid product.
There is the nuance that much code that is available publicly (which includes a GIGANTIC amount of that "written by people in their spare time" stuff) is put there for the explicit goal of showing other people all the details so they can read, reuse, and modify it. Open-source licenses in some form are incredibly popular, though the details vary, and seeing your side project in a product that 100k people use is usually just neat, not "you stole from me".
Artworks have their relatively-popular creative-commons stuff, and some of those follow a similar "do whatever" vibe, but I far more frequently see "attribution required" which generally requires it at the point of use, i.e. immediately alongside the art piece. And if it's something where someone saw your work once and made something different separately, the license generally does not apply. LLMs have no way to do that kind of attribution though, and hammer out stuff that looks eerily familiar but isn't pixel-precise to the original, so it feels like and probably is an unapproved use of their work.
The code equivalent of this is usually "if you have source releases, include it there" or a very few have the equivalent of "please shove a mention somewhere deep in a settings screen that nobody will tap on". Using that code for training is I think relatively justifiable. The licenses matter (and have clearly been broadly ignored, which should not be allowed) but if it wasn't prohibited, it's generally allowed, and if you didn't want that you would need to choose a restrictive license or not publish it.
Plus, like, artists generally are their style, in practical terms. So copying their style is effectively impersonation. Coders on the other hand often intentionally lean heavily on style erasing tools like auto-formatters and common design patterns and whatnot, so their code blends cleanly in more places rather than sounding like exclusively "them".
---
I'm generally okay with permissive open source licensed code being ingested and spat back out in a useful product. That's kinda the point of those licenses. If it requires attribution, it gets murky and probably leans towards "no" - it's clearly not a black-box re-implementation, the LLMs are observing the internals and sometimes regurgitate it verbatim and that is generally not approved when humans do it.
Do I think the biggest LLM companies are staying within the more-obviously-acceptable licenses? Hell no. So I avoid them.
Do I trust any LLM business to actually stick to those licenses? ... probably not right now. But one could exist. Hopefully it'd still have enough training data to be useful.
> People simply don't respect programming as a creative, human endeavour.
Because it's not? Programmers' ethos is having low attachment to code. People work on code together, often with complete strangers, see it modified, sliced, merged and whatever. If you rename a variable in software or refactor a module, it's still the same software.
Meanwhile for art authorship, authenticity and detail are of utter importance.
That's no different from any art. It's like saying that woodworkers' ethos is having low attachment to screws, or guitarists' ethos is having low attachment to picks. Code is a tool; the creative, human endeavor is making an artifact that people can perceive and interact with.
> 1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
Very reminiscent of the "software factory" bullshit peddled by grifters 15 or 20 years ago.
And I think, frankly, a lot of agile practice as I've seen it in industry doesn't respect software development as a creative endeavour either.
But fundamentally I, like a lot of programmers/developers/engineers, got into software because I wanted to make things, and I suspect the way I use AI to help with software development reflects that (tight leash, creative decision-making sits with me, not the machine, etc.).
> Games Workshop elects not to experience multi-year headache. Will use AI when profitable.
Indeed, companies will always start using something if it makes financial sense for them.
> One thing I've found when talking to non-technical board gamers about AI is that while they're 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
This is because they don't view programming as a "creative" form of labor. I think this is an incorrect view, but this knowledge is at least useful in weighting their opinions.
The most interesting observation is that regardless of how "anti-AI" most people seem to be, it isn't that deep of an opinion. Their stated preference is they don't want any AI anywhere, but their revealed preference is they'll continue to spend money as long as the product is good. Most products produced with AI, however, are still crap.
That's the thing. One day everyone is going to just stop caring about being anti-AI. Already I've noticed that most people are only against other people's use of AI. Their own use is justified.
I actively don't use AI because the results are unreliable or ugly. I'm just not against AI in principle. It's funny that my position is considered contemptible by people who regularly use AI but are hardliners against it on moral grounds.
Remember when everything wasn’t a religious war? Actually, I don’t. It was always like this and it’s always going to be like this. Just one forever crusade after another.
I am going to sound cynical, but I strongly believe that everyone's view on AI is contaminated by ulterior motives, and a lot of people are not truthful with themselves about their positions on AI. For instance, I feel as though topics such as copyright, environmentalism, water use, etc., that have been thrust into the limelight are being pushed by people who didn't care about these issues 5-10 years ago, but decided to start clutching their pearls about it now. Particularly copyright; everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.
> everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.
You do not mention the perception of asymmetric legal and market power. Many people think that file sharing Disney movies is ok, but Google scraping the art of independent artists to create AI is not ok. That is not the same dynamic at all as not caring about copyright, and then suddenly caring about copyright.
Suddenly people change their tune, what gives? All we are talking about is the forced wealth transfer of trillions of dollars to the richest megacorps on the planet.
Most people didn't choose to be part of your moon shot death cult. Only the people at the tippy top of the pyramid get golden parachutes off Musk's exploding rocket.
They never changed their position, corpos shouldn't get any money! That's always been the position. They are inherently unethical meat grinders.
> Indeed, companies will always start using something if it makes financial sense for them.
I agree that this is often the case. I still see Games Workshop as an exception. They could have moved plastic production to a cheaper region (e.g. China), but they haven't done so. Financials are obviously important to them, but they're being very careful and thoughtful about their actions. This AI ban is just another showcase of that.
The UK production is mostly about speed (turnaround from 3d prototype, to mold, to finished sprue, and ‘Eavy Metal painted promo images) and quality control for the models. All of their paper and hard plastic products (books, dice, etc) are produced in China.
> The most interesting observation is that regardless of how "anti-AI" most people seem to be, it isn't that deep of an opinion. ... Most products produced with AI, however, are still crap.
How can you go and generalize about these people, calling them idiots (that's what "isn't that deep of an opinion" means, even if you don't say it), and then breathlessly engage in the exact same rhetoric?
> One thing I've found when talking to non-technical board gamers about AI is that while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
I had a conversation with an artist friend some time back. He uses Squarespace for his portfolio website. He was a few drinks in, and ranting about how even if it's primarily artists using these tools professionally at the moment, it'll still lead to a consolidation of jobs, how it's built on the skillset and learning of a broader community than those that will profit, etc. How the little guy was going to get screwed over and over by this sort of thing.
I started out doing webdesign work before I moved more to the operations and infrastructure management side of things, but I saw the writing on the wall with CMS systems, WYSIWYG editors, etc. At the time building anything decent still took someone doing design and dev work, but I knew that they would get better over time, and figured I should make the change.
So I asked him about this. I spoke about how yeah, the people behind Squarespace had the expertise - just like the artists using AI now - but every website built with it or similar is a job that formerly would have gone to the same sort of little guy he was talking about. How it's a consolidation of all the learnings and practices built out by that larger community, where the financial benefits are heavily consolidated. I told him it doesn't much matter to the end web designer whether or not the job got eliminated by non-AI automation and software or an LLM, the work is still gone and the career less and less viable.
I've had similar conversations with artists before. They invariably maintain that it's different, somehow. I don't relish jobs disappearing, but it's nothing new. Someday, maybe enough careers will vanish that we'll need to figure out some sort of system that doesn't involve nearly every adult human working.
Yes, anyone with an art-adjacent hobby like tabletop gaming is militantly anti-AI.
Shelling out to support artists is seen as virtuous, and AI is seen as the opposite of that - not merely replacing artists but stealing from them. There's also a general perception that every cost-saving measure is killing the quality of the product.
So you've got a PR double-whammy there.
It's not "anti-AI" to acknowledge the fact that when your job is to create work for hire in order to build up your employer's IP portfolio, being paid to use AI to create work that isn't IP isn't doing your job.
Your job is to create IP. As per the US Copyright Office, AI output cannot be copyrighted, so it is not anyone's IP, not yours, not your employer's.
That's not "anti-AI", that's AI and copyright reality. Game Workshop runs their business on IP, suddenly creating work that anyone can copy, sell or reproduce because it isn't anyone's IP is antithetical to the purpose of a for-profit company.
> "Games Workshop elects not to experience multi-year headache. Will use AI when profitable."
They will definitely start using AI when their competitors do, to the point that those competitors gain a substantial competitive advantage. Then, at least in a free market, their only choices are to use AI or cease to exist. At that point, it is more survival bias (companies that used AI survived) than profit motive (companies used AI to make more money).
I can guarantee you that there are more than a few small producers in Guangzhou that can, and are, using whatever advantage they can leverage (including AI, like the rest of China's industry).
GW has already kept its customers through:
* deprecating people's models so that they have to buy new ones
* making any number of rules changes that were widely hated
* making lore changes that were widely hated
They aren't going to lose customers because some other company is using AI. They effectively don't have any competition, because people love the Warhammer settings and want to play games set in them.
> Then, at least in a free market, their only choices are to use AI or cease to exist.
That is a false dichotomy. Eschewing AI may actually provide a competitive advantage in some markets. In other words, the third choice is to pivot and differentiate.
Does GW have competitors? Feels like they own their niche (with the associated IP) completely, with extreme amounts of content.
Similar to how Magic rules their segment of the market.
GW doesn't have competitors; it has an absolute monopoly on the 40k and Fantasy worlds it has built up. It's like saying there are competitors to LOTR or Star Wars or DnD.
Their worlds are their monopolies. Worlds that now have multiple decades' worth of lore investment (almost 50 years now, I think).
Just because someone else can make cheaper little plastic models doesn't affect GW in the slightest. Or pump out AI slop stories.
The Horus Heresy book series is like 64 books now. And that's just a spin-off: it's set 10,000 years before the era where 40k actually takes place.
With so much lore they need complicated archiving and tracking to keep on top of it all (I happen to know their chief archivist).
You can't replace that. I only say all this just to try and explain how off the mark you are on understanding what the actual value of the company is.
I live in Nottingham, where GW is based. Another of my friends happens to have a company on an industrial estate where there are like 3 other tabletop gaming companies. All ex-GW staff.
You could probably fit all their buildings in the pub that GW has on its colossal factory site.
I used to know people who worked at Boots, which used to be the big Nottingham employer. Nowadays, I know more people who work at GW.
BattleTech is somewhat of a competitor, and a variety of smaller games have some niches.
Plenty of people use proxies, too. There are places that do monthly packs of new STLs that could make up an entire faction army, and there have long been places that sold "definitely not Space Marines and Sisters of Battle" minis too.
They don't have a threat of anyone overtaking them at current, but AI making alternatives in this vein even cheaper could eat away at portions of their bottom line.
> ... while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites ...
And this is not complicated at all. It's the quality of the output.
Users appreciate vibecoded apps, but developers are universally unenthusiastic about vibecoded pull requests. Lots of those same devs use AI for "menial" tasks and business emails. And this is NOT a double standard: people are clearly OK when generative AI outputs exist but aren't exposed to unsuspecting human eyes, and it's not OK when they are exposed, because the data AIs generate hasn't yet cleared some sort of minimum threshold. Maybe SAN values.
(Also: IIUC, cults and Ponzi-scheme recruitment are endemic in tabletop game communities, so board game producers distancing themselves from anything hot in those circles, even if it were slightly irrational to do so, also makes sense.)
Don't assume your experience is uniformly distributed. I know tabletop gamers addicted to AI and 3D printing their own game pieces.
I would describe them as anti-corporate IP/copyright cartel. They understand things like automobiles and personal computers require organized heavy lifting but laying claim to own our culture and entertainment, our emotional identity is a joke.
Just rich people controlling agency, indoctrinating kids with capitalist essentialism; by chance we were born before you and survived this long so neener neener! We own everything!
Such an unserious country.
I doubt a random internet commenter can persuade you, but LLMs and tools built around them are fundamentally different from NFT/crypto.
NFTs/Crypto are just ways to do crimes/speculate/evade regulations. They aren't useful outside of "financial engineering." You were right to dismiss them.
LLMs are extremely useful for real world use cases. There are a lot of practical and ethical concerns with their use: energy usage, who owns them, who profits from them, slop generation, trust erosion... I mean, a lot. And there are indeed hucksters selling AI snake oil everywhere, which may be what tripped your BS meter.
But fundamentally, LLMs are very useful, and comparing them to NFT/Crypto does a disservice to the utility of the tech.
> They aren't useful outside of "financial engineering."
Without disagreeing with your overall point in 99% of cases, we did actually have a good use for pinning things in the Bitcoin blockchain when I worked at Keybase. If you're trying to do peer-to-peer security, and you want to prove not only that the evil server hasn't forged anything (which you do with signatures) but also that it hasn't deleted anything legitimate, "throw a hash in the blockchain" really is the Right Way to solve that problem.
> If you're trying to do peer-to-peer security, and you want to prove not only that the evil server hasn't forged anything (which you do with signatures) but also that it hasn't deleted anything legitimate, "throw a hash in the blockchain" really is the Right Way to solve that problem.
and it only requires the same electricity as a medium sized country to do it
continuously, forever
The property that makes the blockchain useful for this, though, is that it's widely-distributed. "Throw a classified in the national newspaper" is just as good. Nowadays, we have better solutions (appendable BitTorrent comes to mind), with most of the advantages of blockchain but few of the disadvantages.
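For readers who haven't seen this pattern: here's a minimal Python sketch of the idea, with the anchoring step reduced to just saving a head hash (standing in for the blockchain pin or the newspaper classified above). Signatures alone can't prove a server didn't silently drop entries; a published head hash over a hash-chained log can.

    import hashlib
    import json

    def entry_hash(prev_hash: str, payload: dict) -> str:
        """Hash-chain step: each entry commits to the previous entry's hash."""
        data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
        return hashlib.sha256(data.encode()).hexdigest()

    def chain_head(entries: list) -> str:
        """Fold an append-only log into a single head hash."""
        head = "genesis"
        for entry in entries:
            head = entry_hash(head, entry)
        return head

    # The server serves the log; clients compare the head they compute against
    # the head that was anchored somewhere the server cannot rewrite.
    log = [{"seq": 1, "op": "add_key"}, {"seq": 2, "op": "revoke_key"}]
    published_head = chain_head(log)  # anchored externally at some time T

    # Later, an "evil server" silently drops the revocation entry:
    tampered = log[:1]
    assert chain_head(tampered) != published_head  # the deletion is detectable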
If I was in management, I probably also wouldn't like my designers to use AI. I pay them good money to draw original pieces, and everyone can tell when AI is used: it looks generic. I'd want my money's worth.
The other problem is that AI-generated material does not itself enjoy copyright protection.
Regardless, you should check out the AI features in the Adobe products [1]. Generative removal, fill, etc [2].
AI, in modern tools, is not just "draw the scene so I can trace it".
[1] https://www.adobe.com/ai/overview/features.html
Sure, and limit yourself at the starting point. People underestimate how limiting these tools are; they're trained on a fixed set and can only reproduce noise from here and there.
> they're trained on a fixed set and can only reproduce noise from here and there
This anti-AI argument doesn't make sense, it's like saying it's impossible to reinvent multiplication based on reading a times table. You can create new things via generalization or in-context learning (references).
In practice many image generation models aren't that powerful, but Gemini's is.
If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.
If image generation is anything like code generation then AI is not good at copying layout / art style of the coder / artist.
Using Visual Studio, all the AI code generation applies Microsoft's syntax style and not my syntax style. The returned code might be correct, but the layout / art / syntax is completely off. This is with a solution that has a little less than one million lines of code, at the moment, which the AI can work off of.
Art is not constant. The artist has a flow, and may have an idea, but the art will change form with each stroke, even removing strokes that don't fit. To me, AI-generated content lacks the emotion of the artist.
Image generation is nothing like AI code generation in this regard. Copying artist style is one of the things that is explicitly quite easy to do with open-weight models. Go to civitai and there are a million LoRAs trained specifically on recreating artist styles. Earlier on in the Stable Diffusion days it even got fairly mean-spirited - someone would make a LoRA for an artist (or there would be enough in the training data for the base model to not need one), the artist would complain about people using it to copy their style, and then there would be an influx of people making more and better LoRAs for that artist. Sam Yang put out what was initially a relatively tame tweet complaining about it, and people instantly started training them just to replicate his style even more closely.
Note, the original artist whose style Stable Diffusion was supposedly copying (Greg Rutkowski, a "concept art matte painting" artist) was in fact never in the training data.
Style is in the eye of the beholder and it seems that the text encoder just interpreted his name closely enough for it to seemingly work.
Early Stable Diffusion prompting was a lot of cargo-cult copy-pasting of random crap into every prompt.
For an artist, the starting point is blank page, followed by a blur of erased initial sketches/strokes. And, sources of inspiration are still a useful thing.
And it's not that limiting. You aren't stuck with anything you start with. You can keep painting.
https://youtu.be/E3Yo7PULlPs?t=668
Well, you can definitely make AI art much less obvious with the right tweaking (directly running models, blending different sub-models, etc). The bigger issues from a professional perspective are liability concerns and then, even if you have guaranteed licensed sources, the impossibility of controlling fine details. For a company like GW it's kind of pointless if it can't reflect all the decades worth of odds and ends they've built both the game and the surrounding franchise around.
At the same time, the 3D printing community is very much embracing AI as a means to circumvent price-gouging behavior by GW in particular. The popular STL slicer Lychee just recently added a generator tool at https://3dgen.lychee.co/ that has seen both massive protests from hobby community idealists and, as it's still around, likely a lot of adoption by the less vocal pragmatists.
We'll have to see how this plays out. Games Workshop is (supposedly) notoriously litigious, and they've gone after artists who get too close to their art style. AI models are trained on that, so this is going to be an interesting thing to monitor.
It's not as fancy as that but I've had luck simply asking an LLM to generate an OpenSCAD file.
I wanted to adapt my bike balancer to an unusual sized wheel and simply measured the diameter of the rod and the outer diameter of the hole where the axle slides into, asked an LLM to produce an adapter, converted to an STL and hit print and got a fully functional 3D printed part. Felt like living in the future, maybe one step away from owning a Star Trek Replicator.
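For anyone curious, the generated source for a part like that really is tiny. A hypothetical sketch, written here as Python that emits the OpenSCAD file (the dimensions are made-up placeholders, not the measurements from the comment above):

    # Sleeve adapter: slides over the balancer rod, fills the wheel's axle hole.
    # All dimensions are illustrative placeholders.
    ROD_D = 12.0        # measured outer diameter of the balancer rod (mm)
    AXLE_HOLE_D = 17.0  # measured inner diameter of the wheel's axle hole (mm)
    LENGTH = 30.0       # adapter length (mm)

    scad = f"""
    $fn = 128;  // smooth cylinders
    difference() {{
        cylinder(h={LENGTH}, d={AXLE_HOLE_D});    // outer body sized to the hole
        translate([0, 0, -1])
            cylinder(h={LENGTH + 2}, d={ROD_D});  // through-bore for the rod
    }}
    """

    with open("adapter.scad", "w") as f:
        f.write(scad)
    # Render to STL with: openscad -o adapter.stl adapter.scad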
You mean a technology primarily created by stealing content without attribution is being embraced by people that want to steal without contribution? Shocking!
Good for them, it's nice to see some management that hasn't totally bought into this "no workers, only subscription AI bots" vision of the future that so many tech CEOs are selling.
Personally I would never pay for tabletop miniatures or lore books generated by AI. It's the same core problem as publishing regurgitated LLM crap in a book or using ChatGPT to write an email - I'm not going to spend my precious time reading something that the author didn't spend time to write.
I am perfectly capable of asking a model to generate a miniature, or a story, or a dumb comment for reddit. I have no desire to pay a premium for someone else to do it and get no value from the generated content - the only original part is the prompt so if you try to sell AI generated "content" you might as well just sell the prompt instead.
I see companies making statements like these (Larian and others), who must be afraid of the reaction from their customers if they decided to use AI, eventually coming to regret it. There will be other companies that do what they do better and faster because they leverage AI as part of the process, and I believe the backlash against AI will disappear very soon as people begin using products made with AI that are really very good. They will just stop caring / forget they had an issue with it in the first place as they watch their friends and others who don't care enjoying themselves regardless.
AI companies would love that. Just as oil companies love it that climate change is still debated throughout society. Big tech would prefer nobody cared about privacy.
People are starting to notice and care about these things.
Maybe I’m just not cynical enough about the “average” non-HN population but I think there are quite a few people who care.
Lots of people from all walks of life play board games. There are a lot of people who refuse to buy games made with AI generated assets. They go as far as making forums and tracking these things so that other folks can avoid them.
Well, that's the thing that makes capitalism the most effective resource management system ever. If this is a bad play, and people do indeed find value in AI, it will be sink or swim. If it's not, then the AI-forward companies will have to learn to stand out, undifferentiated, in the overwhelming sea of slop they're competing against. That's why capitalists get paid so much: it's not a clear decision, in this case it seems contrarian, and if it pays off then they make money.
If they did use AI and still charged as much as they do for a sprue of models people would definitely be upset.
AI-generated anything is seen as cheap. It is cheap. It generates "similar" reproductions from its training set. It's called "slop" for a reason: low effort, low quality.
There have been quality issues in some of GW’s recent product lines, but for the most part they still have fans because the bar is already high for what they make.
Cutting costs to make an extra buck by making the product crappier would be a kick to the knee. Fans already pay a premium for their products.
Good on them for not going down that road.
GW has put an immense amount of effort over time into reworking all their previously more generic marketing and lore elements into being more distinctly copyrightable and trademarkable. They're not going to let possible future lawsuits over LLM training data and the like screw that up.
And that’s probably the OpenAI killer. If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
This fast run to use LLMs in everything can be undone by one court decision - and the sensible thing is to isolate as much as you can.
Also I don't think it will be easy to defend a copyright on AI-generated images, especially if your IP is 'lot of humanoid soldiers in power armor' and not specific characters.
Really interesting insight.
Indemnification only means something if the indemnifying party exists and is solvent. If copyright claims on training data got traction, it would be neither, so it doesn't matter if they provide this or not. They probably won't exist as a solvent entity in a couple years anyway, so even the question of whether the indemnification means anything will go away.
> If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
Right... there has been ample code and visual art around to copy for decades, and people have, and they get away with it, and nothing bad happens. So where are the "millions of coming copyright claims" now?
I don't think what you are talking about has anything to do with killing OpenAI; there's no one court decision that has to do with any of this stuff.
> there has been ample code and visual art around to copy for decades, and people have, and they get away with it, and nothing bad happens
Some genres of music make heavy use of 'samples' - tiny snippets of other recordings, often sub-5-seconds. Always a tiny fraction of the original piece, always chopped up, distorted and rearranged.
And yet sampling isn't fair use - the artists have to license every single sample individually. People who release successful records with unlicensed samples can get sued, and end up having to pay out for the samples that contributed to their successful record.
On the other hand, if an artist likes a drum break but instead of sampling it they pay another drummer to re-create it as closely as possible - that's 100% legal, no more copyright issue.
Hypothetically, one could imagine a world where the same logic applies to generative AI - that art generated by an AI trained on Studio Ghibli art is a derivative work the same way a song with unlicensed drum samples is.
I think it's extremely unlikely the US will go in that direction, simply because the likes of nvidia have so much money. But I can see why a cautious organisation might want to wait and see.
It always amuses me to think of technologists as the modern day Egyptian priests/magicians. To the uninitiated, what we do is probably just as opaque and mysterious. Plus we have our own fair share of charlatans.
I'm surprised they didn't take the opportunity to lean into their existing properties, which in-universe treat AI as an abomination (the in-universe term "AI" stands for "abominable intelligence").
Seems like a missed bit of PR for their community.
WFH vs in-office, AI mandatory vs AI forbidden: ideally I want my boss to let me work however I want, and ideally+realistically however makes me most productive.
> in Its Content or Designs
Personally: I'm a developer, so my situation is different. But right now I use AI code completion and Claude Code. I think I'd be fine without Claude Code, since it hasn't "clicked" for me yet; I find it motivating, particularly for new features and boilerplate, but I often (even with the boilerplate) must rewrite a lot of what it generates. Code completion would be harder to give up, but maybe if the work was interesting and non-boilerplate enough I'd manage.
I've heard Claude Code has improved a lot very recently, so I would feel completely left behind without it, except that I can use it in my spare time on personal projects. But if it keeps improving and/or ends up "clicking", then I may feel like I'm spinning my wheels at work.
Correct. All of this is about shifting the power dynamics toward the middle managers rather than the engineers, who are already trusted to design and develop the system. It's as if there always has to be some gatekeeping and wrangling work in place so these managers feel they are not useless in the overall process. Just design a process that ensures the outcome is of xyz quality and never check how and where it was done, but the human urge to control is always there.