AI coding is gambling

(notes.visaint.space)

276 points | by speckx 4 hours ago

76 comments

  • itsgrimetime 3 hours ago
    All of this new capability has made me realize that the reason I love programming _isn't_ the same as the OP's. I used to think (and tell others) that I loved understanding something deeply, wading through the details to figure out a tough problem. But actually, being able to will anything I can think of into existence is what I love about programming. I do feel for the people who were able to make careers out of falling in love with and getting good at picking problems & systems apart, breaking them down, and understanding them fully. I respect the discipline, curiosity, and intellect they have. But I'm also elated with where things are at and where they're going. This feels absurd to say, but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months. Having tools that can finally match the speed my ideas come to me is intoxicating.
    • maplethorpe 21 minutes ago
      If there were a website called InfiniteAppStore, which contained every app imaginable, and where you could type in your search and it would return the code for that app, would you find it as satisfying to use as Claude Code?

      On the surface this does not sound as satisfying, because it more resembles shopping than coding. But once Claude Code is finally tuned to do its job perfectly, you will essentially be using that infinite app store. You're actually using it right now, every time you use Claude Code — just an imperfect version of it.

      If you enjoy using AI because it allows you to "will anything into existence", it's because the process is currently imperfect. Using Claude Code is closer to shopping than coding, but because the process is obfuscated, it feels like you're the one making the products in the shopping catalogue every time you place an order.

    • strangattractor 3 hours ago
      One size never fits all. I am old enough to remember what a game changer spreadsheets (VisiCalc) were. They made the personal computer into a Swiss Army knife for many people who could not justify investing large sums of money into software to solve a niche problem. Until that time PCs simply were not a big thing.

      I believe AI will do something similar for programming. The level of complexity in modern apps is high and requires the use of many technologies that most of us cannot remotely claim to be expert in. Getting from an idea to a prototype will definitely be easier. Production code is another beast. Dealing with legacy systems etc. will still require experts, at least for the near future, IMHO.

      • hungryhobbit 2 hours ago
        I remember when my dev team included some people using Emacs, some using Eclipse (this was pre-VS Code), and some using IntelliJ.

        Developers will always disagree on the best tool for X ... but we should all fear the Luddites who refuse to even try new tools, like AI. That personality type doesn't at all mesh with my idea of a "good programmer".

        • _se 1 hour ago
          Are you implying that someone who prefers Eclipse is more likely to be a good software engineer than someone who prefers Emacs? If so, that is so hilariously backwards that I can't even begin to understand the types of experiences that you must've had.

          I am sure that you're objectively wrong if that is what you're saying.

          • altruios 45 minutes ago
            I'm reading it as: those unwilling to try both and make an honest evaluation, who instead cling to preconceived notions and bigotry, tend to make bad programmers. Preferences are fine, but dogmatism should be avoided.
        • tovej 1 hour ago
          I will try anything reasonable, and I have tried LLM tools for programming. But there's no way I would use them daily. They're too inefficient, too error-prone, and would actively make me a worse programmer (as I would be writing less code and making fewer decisions, and would understand less of the systems I'm building).

          All the excellent developers around me are _not_ using AI except for very small, contained tasks.

        • rsoto2 2 hours ago
          Flat out wrong. The most impressive engineers I've met in my career did not care for fancy tools with bells and whistles.
          • hext 1 hour ago
            Sure, I bet they didn't outright dismiss them as useless to the entire field though! I'm sure they still understood the value those fancy tools provided to their peers.
            • skydhash 57 minutes ago
              Unless someone is trolling, it’s rare for people to deem it as “useless”. Most counterpoints have been about ethics and issues that surround LLM usage. Things like licensing, coding vs review time, correctness and maintainability of the generated code, etc… Unless you believe we’re in a software engineering utopia, I think it’s fair to call those out.
    • estimator7292 20 minutes ago
      For me the joy comes from the understanding that the answer to "Is xyz possible?" is always, always "yes". It might be difficult, expensive, or take a long time, but my stance as an engineer is that anything is possible.

      Hyperbole, yes: many things are, in fact, not possible. But most people have the sizes of the two categories confused. The number of things that are categorically impossible is less than a rounding error compared to how many things are possible.

      The joy and wonder of being an engineer is in taking problems deemed "impossible" and creating possibilities. It's in extracting a solution from infinite possibilities and redefining what possible even is.

    • bakugo 1 hour ago
      > Going to McDonald's made me realize that the reason I love cooking isn't the actual cooking itself. Being able to order food at McDonald's and get it without doing anything myself is the best part about cooking! Now that I only eat McDonald's, I feel like I'm _good_ at cooking.

      You do not like and have never liked programming. You wanted to be a manager. They are completely different things.

    • applfanboysbgon 2 hours ago
      > but I finally feel like I'm _good_ at programming, which is insane

      Yes, it is insane. You couldn't torture this confession out of me. But that's the drug they're selling you, isn't it? You don't even write code, but you're getting a self-inflated sense of worth. It must be addicting! Of course, whether or not the programs you prompt are actually good surely has no relation to whether you feel they're good, since you're not the one writing them, and you apparently were not capable of writing them before, so you're not particularly qualified to review them either.

      > having tools that can finally match the speed my ideas come to me

      Anyone can be an "ideas guy". We laughed at those people, because having ideas is not the hard part. The hard part was in all of the hundreds and thousands of little details that go into building the ideas into something actually worthwhile, and that hasn't changed. LLMs can build an idea into a prototype in a weekend. I am still waiting to see LLMs build an idea into something other people use at scale, once, ever, other than LLM wrappers. Either every person who is all-in on vibes only has ideas that consist of making .md files and publishing them as a "meta agent framework", or LLMs are not actually doing a great job of translating ideas into tangibly useful software.

      • 542458 2 hours ago
        > Anyone can be an "ideas guy".

        I disagree with this. I've worked with amazing "ideas guys" who just cranked out customer insights and interesting concepts, and I've worked with lousy ones, who just kinda meandered and never had a focused vision beyond a milquetoast copy of the last thing they saw. There's a real skill to forming good concepts, and it's not a skill everyone has!

        • applfanboysbgon 2 hours ago
          I do agree that having good ideas is a skill in its own right. But people with bad ideas are ideas guys too! You see them all the time in the indie game development scene in particular. "I need a programmer, and an artist, and a composer, to build this amazing idea for me!", together with an 8-paragraph wall of text (the paragraph breaks are if you're lucky) describing the idea, and as you'd expect from somebody who couldn't be bothered to develop a single skill, their game ideas are exactly as good as their programming, art, and music.

          I find that the strength of people's ideas tends to be highly correlated with their overall skills. I don't know that you can develop the capability for good ideas without getting your hands dirty learning a field, experimenting, absorbing all kinds of information, and understanding what really goes into the making of a good idea. In that way, the person with good ideas always ends up being more than just an ideas guy. They don't just have good ideas, they have good ideas and the skills to back them up. Whereas the "ideas guy" label is usually applied to people who have nothing to bring to the table other than their ideas, and wouldn't you know it, those ideas aren't nearly as good as they think they are.

        • Aurornis 1 hour ago
          I think the Product Manager title was (and still is) one of the most abused titles in tech. A great product manager is indispensable for setting product direction in a way that can't be accomplished by others doing it part-time or advocating for their own needs. I've worked with some truly great product managers.

          I've also worked with a lot of awful product managers. The product manager title is squishy enough that it gets assigned to people with charisma or confidence without actual skills to follow through. A bad product manager can blend into a company for years by relaying ideas around from one group to another and having ChatGPT write documents. The engineers on the ground see the incompetence long before it becomes undeniable at the higher ranks.

          When I read Hacker News and other sites I suspect a lot of engineers have only ever worked with bad PMs from the latter category.

          • angrymouse 1 hour ago
            What do you think the good ones do? And how do they set direction in a way that’s good compared to a bad one?
        • moduspol 1 hour ago
          Anyone can be an "ideas guy" because there's no failure event that stops you. Contrast this with being a plumber. Not anyone can be a plumber.
          • tekacs 1 hour ago
            I think that's the point about building with agents, though. Your ideas meet reality sooner and you actually get feedback on whether they are worth anything or not. So you're not really being an ideas guy in the sense of just throwing ideas out there. You're being an ideas guy in the sense of testing your ideas, which is really the essence of building startups: figuring out what people want.
            • moduspol 50 minutes ago
              That's true. I was just responding to the post above, which seemed to be inferring a different meaning (i.e. that there are no bad or good ideas guys) than how I interpreted it.
      • thangalin 1 hour ago
        > LLMs are not actually doing a great job of translating ideas into tangibly useful software

        Here is the source code for a greenfield, zero-dependency, 100% pure PHP raw Git repository viewer made for self-hosted or shared environments that is 99.9% vibe-coded and has had ~10k hits and ~7k viewers of late, with 0 errors reported in the logs over the last 24 hours:

        https://repo.autonoma.ca/repo/treetrek

      • jmuguy 1 hour ago
        A lot of this also misses the value of understanding the software we're creating. I have a deep knowledge of our SaaS because I've spent years coding it. If I had been prompting an LLM this entire time, I can't imagine I would have anywhere near the same understanding. That is assuming purely planning and prompting could actually result in a product that's in active use for years, and not just a pile of prototypes that apparently desperately needed to be created and were just waiting for AI to come along to make them possible.

        I've been using AI tools more but this idea of never actually writing any code seems way too black and white to be serious.

      • supern0va 2 hours ago
        >Anyone can be an "ideas guy".

        I think there's way more nuance to this than you're willing to admit here. There's a significant difference between the guy who thinks "I'm going to make X app to do Y and get loaded." and the person who really understands the details of what they want to create and has a concrete vision of how to shape it.

        I think that product shaping and detail oriented vision of how something should work and be used by people is genuinely challenging, wholly aside from the lower level technical skills required to execute it.

        This is part of the reason why I wouldn't be surprised at all to see product manager types getting more hands-on, or seeing the software engineering profession evolve into more of a PM/SDE hybrid.

      • bartread 41 minutes ago
        > You don't even write code, but you're getting a self-inflated sense of worth.

        That’s because when it comes to delivering value, code doesn’t matter: outcomes do.

        If I spend 10 hours hand coding something versus prompting an LLM to create a solution that delivers the same outcome in a few minutes, and I can get that solution into production in under an hour from the moment my fingers first touch the keyboard to start writing the prompt, well, whilst these solutions might both deliver the same value, the ROI differs significantly.
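        To put rough numbers on that ROI comparison (the hourly rate below is a hypothetical figure, not from the comment; only the 10-hours-versus-under-an-hour split comes from the text above):

```python
# Hypothetical cost comparison based on the scenario above:
# 10 hours hand-coding vs. under 1 hour prompting, reviewing, and shipping.
hourly_rate = 100  # assumed engineer cost per hour, in dollars

hand_coded_cost = hourly_rate * 10      # 1000
llm_assisted_cost = hourly_rate * 1     # 100

# Same delivered outcome, so the ROI ratio is simply the cost ratio.
print(hand_coded_cost / llm_assisted_cost)  # 10.0
```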

      • xg15 1 hour ago
        > LLMs can build an idea into a prototype in a weekend

        Just to nitpick, because I think the difference is relevant: "Idea to prototype in a weekend" was possible for a spirited coder already before LLMs.

        Now it's "Idea to prototype in a few minutes".

      • thunky 1 hour ago
        > Anyone can be an "ideas guy". We laughed at those people, because having ideas is not the hard part.

        Sure it's easy to create bad ideas. Not easy at all to create good ones.

      • paultendo 1 hour ago
        Anyone can be an "ideas guy", very few are good at it.

        "I am still waiting to see LLMs build an idea into something other people use at scale" - so Microsoft using Claude Code doesn't count?

        • applfanboysbgon 1 hour ago
          Nope. I specifically excluded LLM wrappers, which I think is a fair qualification for a "first useful software at scale". If it turns out that LLMs can produce useful things that aren't LLM wrappers, then maybe later we can evaluate whether LLM wrappers are worthwhile. But if LLM wrappers are only used to produce other LLM wrappers, which are used to produce other LLM wrappers, it's merely indicative of a pyramid scheme wherein people are trying to sell you on hype because they can't sell you anything that actually produces utility in the real world (browsers, compilers, IDEs, production databases, music production software, photo editing software, Excel, viable Discord replacement, any of the reasons people used computers as tools to accomplish things).

          On the note of Microsoft specifically, they've shipped a critical OS-destroying bug every month for several months straight now, and people seem to be generally in agreement that Windows 11 has only been going further and further downhill. I have literally not seen a single person with a positive opinion on anything W11 or associated programs have done in the last 6 months. Which does not create a compelling case for translating LLM wrapper into real-world useful code.

      • awesome_dude 1 hour ago
        > because having ideas is not the hard part.

        I agree. It's the "buy in" from the market.

        The biggest names in software products have (other people's) ideas to sell, and they're selling the buggy versions of those ideas. Microsoft, Salesforce, even early Facebook: these weren't triumphs of 'monk-like discipline' in the code. They were triumphs of market buy-in and timing.

      • huflungdung 1 hour ago
        [dead]
    • bluefirebrand 3 hours ago
      > but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months

      This is exactly the sort of mentality that makes me hate this technology

      You finally feel good at programming despite admitting that you aren't actually doing it

      Please explain why anyone should take this seriously?

      • pdntspa 3 hours ago
        Because the programming is and was always a means to an end. Obsessing over the specific mechanical act of programming is missing the forest for the trees.

        I agree with the GP that the speed at which I am able to execute my vision is exhilarating. It is making me love programming again. My side projects, which have been hanging on the wall for years, are actually getting done. And quickly!

        The actual act of keying in code is drudgery for me. I've written so much code in so many languages that it is hard not to hate them all. Why the fuck is it a hash in ruby but a dict in python? How the hell do I get the current unixtime in this language again?!? Why the fuck do I need to learn yet another stupid vocabulary for what is essentially databinding? Who cares, let the AI handle it
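        The cross-language trivia being groused about here is real; as a minimal sketch of two of the examples, in Python with the Ruby equivalents noted in comments:

```python
import time

# What Ruby calls a Hash ({ name: "example" }), Python calls a dict.
config = {"name": "example", "retries": 3}

# Current Unix time: time.time() in Python, Time.now.to_i in Ruby.
now = int(time.time())

print(config["retries"], now > 0)
```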

        • mkehrt 3 hours ago
          None of my side projects are things where I want the output. They're all things where I want to write the code myself so I understand it better. AI is antithetical to this.
          • dolebirchwood 2 hours ago
            I have three side projects that revolve around taking public access data from shitty, barely usable local government websites, and then using that data to build more intuitive and useful UIs around them. They're portfolio pieces, but also a public service. I already know how to build all of these systems manually, but I have better things to do. So, hell yeah I'm just going to prompt my way to output. If the code works, I don't care how it was written, and neither do the members of my community who use my free sites.
          • pdntspa 3 hours ago
            All of my side projects scratch an itch, so I do want the output. There are not enough hours in the day for me to make all the things I want to make. Code is just the vessel, and one I am happy to outsource if I can maintain a high standard of work. It's a blessing to finally find a workflow that makes me feel like I have a shot at building most of the things I want to.
            • cess11 2 hours ago
              Are these things that no one previously built and published, so you can go and take a look at their implementation?
              • pdntspa 2 hours ago
                Possibly. Mostly?

                I wanted a stackable desk tray shelf thing for my desk in literally any size for my clutter. Too lazy to go shopping for one, and I couldn't find one on any of the maker sites, so I had Claude write me an OpenSCAD file over lunch break, then we iterated on it after hours. By the end of work the next day I had three of them sitting on my desk after about 3 hours of back-and-forth the night before (along with about half a dozen tiny prototypes), and that's including the 2-hour print time for each shelf.

                I want a music metadata tool that is essentially TheGodfather but brought into the modern day, incorporating workflows I wish I had for my DJing and music production. And not some stupid web app: a proper desktop app with a proper windowing toolkit. I'd estimate it would take me 12-18 months to get to a beta the old way, to the exclusion of most of my other hobbies and projects; instead, first Gemini and then Claude and I managed to get a pretty nice alpha out in a few months over the summer while I was unemployed. There's still a lot left I want to add, but it has already replaced several apps in my music intake workflow. I've had a number of successful DJ gigs making use of the music that I run through this app. Funny enough, the skills I learned on that project landed me a pretty great gig that lets me do essentially the same thing, at the same pace, for more pay than I've ever made in my SWE career to date.

                A bunch of features for my website, a hand-coded Rails app I wrote a few years ago, went from my TODO pile to deployment in just a couple of hours. Not to mention it handled upgrading Ruby and Rails and ported the whole deployment to Docker in an afternoon, which made it easy to migrate to a $3 VPS fronted by Cloudflare.

                I have a ton of ideas for games and multimedia-type apps that I would never be able to work on at an acceptable pace while also earning the living that lets me afford these tools in the first place. Most of those ideas are unlike any game I've ever seen or played. I'm not ready to start on these yet, but when/if I do, I expect development to proceed at a comfortably brisk pace. The possibilities for Claude + Unreal + the years and years of free assets I've collected from Epic's Unreal store are exciting! And I haven't even gotten into having AI generate game assets.

                So idunno, does that count?

                • tovej 1 hour ago
                  Would you share the music app? Do you have a public repo or demo somewhere?

                  You didn't really describe it very much, so it's hard to say what it actually does. I'm interested in evaluating the quality of vibecoded projects people actually use.

                  • pdntspa 1 hour ago
                    At a later date, perhaps. I haven't messed with this project since I got employed, and it was written over summer 2025, when the tooling for agentic development was a lot worse. (Very ADD here.) There's also the open question of how best to package a Python app that makes use of PyTorch and SciPy for distribution to nontechnical users. I want to solve that before I start putting this in other people's hands.

                    Careful with the term 'vibe coded', that does not characterize how I work.

                    • tovej 29 minutes ago
                      Vibecoding is the term for building software with LLM tools. Did you do something different?

                      I'm just getting tired of hearing claims of incredible software being built with LLM-based tools, but when I ask to see them, I get nothing.

                      Your claim of 12-18 months for a windowed music metadata app seems weird. That seems like about a week with Dear ImGui and some file-format-reading libraries to me. Am I missing something?

                      • pdntspa 1 minute ago
                        > Vibecoding is the term for building software with LLM tools

                        without manual review and guidance. Coasting along purely on vibes. Hence the name. Agentic development is the middle ground where you're actively reviewing and architecting.

                        Dear ImGui isn't a 'proper' windowing toolkit. It's immediate-mode; it doesn't use OS affordances. It's not WinForms or GTK or Qt (though to be fair Qt isn't quite native, but it's by far the closest).

                        I never made any claims of 'incredible software'. I am building things that I need and want. I will give them to the world if I so choose and if they are good enough. And it's not there yet.

                        And consider that I have almost zero domain knowledge in the area of DSP or audio analysis, that I'd only have a couple hours a day to work on it at best (energy, motivation, and other factors notwithstanding), and the amount of learning it would take to get to the point where something like that would be "about a week": that's where most of that 12-18 months goes. And yes, the metadata part is easy, but the code that generates metadata good enough to perform with? That's hard.

          • hk__2 3 hours ago
            All my side projects exist to solve a problem.
        • hk__2 2 hours ago
          > The actual act of keying in code is drudgery for me. I've written so much code in so many languages that it is hard not to hate them all. Why the fuck is it a hash in ruby but a dict in python? How the hell do I get the current unixtime in this language again?!? Why the fuck do I need to learn yet another stupid vocabulary for what is essentially databinding?

          These are the downsides, but there are also upsides, like in human languages: "wow, I can express this complex idea with just these three words? I never thought about that!". Trying a new programming paradigm opens your mind and changes your way of programming in _any_ language forever.
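          A generic illustration of that "same idea in fewer words" effect, sketched in Python (not an example from the thread):

```python
# Imperative version: sum the squares of the even numbers below 10.
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# The same idea, expressed declaratively as a single comprehension.
total_fp = sum(n * n for n in range(10) if n % 2 == 0)

print(total, total_fp)  # 120 120
```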

        • bigstrat2003 3 hours ago
          > Because the programming is and was always a means to an end.

          No. Programming is a specific act (writing code), and that act is also a means to an end. But getting to the goal does not mean you did programming. Saying "I'm good at programming" when you are just using LLMs to generate code for you is like saying "I'm good at driving" when you only ever take an Uber and don't ever drive yourself. It's complete nonsense. If you aren't programming (as the OP clearly said he isn't), then you can't be good at programming because you aren't doing it.

          • NewsaHackO 2 hours ago
            I guess I agree with you, but I think the GP may have misspoken and meant he loves building software. It's sort of like the difference between knitting and making clothes. The GP likely loves making clothes in the abstract and realized that he won't have to knit anymore to do so. And he never really liked knitting in the first place, as it was just a means to an end.
            • datavalue 39 minutes ago
              It’s similar to the arrival of mechanized looms in the 19th century. My ancestors were weavers, and automation eventually replaced those jobs. I’ve spent 40 years working in IT as a programmer and am now nearing retirement, so I’ve been fortunate. To me it feels like programming as a skill may not have much time left. Probably how my ancestors felt.
            • munk-a 2 hours ago
              Most people who are knitting do it purely for the experience of knitting. If you need clothes it's far more affordable to buy the cheap manufactured stuff. Some people certainly enjoy the creativity of expression and wish they could get to that easier - but most of those people have moved away from manual tasks like knitting and instead just draw or render their imagination. There's genuine value in making things by hand as the process allows us time to study our goal and shape our technique mid-approach. GP may legitimately like knitting more than making clothes.
              • NewsaHackO 2 hours ago
                I think you misunderstood my post. Nowadays many people knit for the joy of knitting, but people used to knit to create clothing to wear or to sell. Of course, automated knitting machines have largely replaced hand knitting, yet people still do it. If you are very good at hand knitting, you might see if you can sell some work. However, if you want to make knitted clothing at scale, you would be better served taking a high-level approach to the actual design of the clothing and learning how to prompt the automated knitting machine to do so, instead of optimizing for how you yourself would hand-knit it.
                • munk-a 21 minutes ago
                  That would be a maximally economically efficient approach to producing knit clothes, but hand-knit clothing still has a significant market. This year I sought out a cobbler to get a new pair of shoes, because my feet are a bit weird and the machine templates for what a foot should look like don't produce something I can comfortably wear. If you personally derive value from putting in the manual labor to produce "artisanal" goods, in most fields you can find a market willing to pay a premium for your labor. This market is much smaller than the machine-driven equivalent, so it can't support nearly the same quantity of producers as the market supported before automation came along, but it is a niche you can operate within.

                  I don't disagree with your main thesis that an automated knitting machine can out-produce hand-knit goods, but I do think you're underappreciating that there still is a market for the non-automated goods. Even if those skills can't compete for the majority of the market, markets are weird and non-uniform, so they do still feed into a market.

                • r-w 1 hour ago
                  [dead]
          • pdntspa 3 hours ago
            I'm still reading the code, I'm still correcting the LLM's laughably awful architecture and abstractions, and I'm still spending large chunks of time in the design and planning phase with the LLM. The only thing it does is write the code.

            But that's not programming because it's a natural-language conversation?

            • californical 1 hour ago
              I mean, yes - you’re reviewing and architecting, but not creating.

              Same as if you use an image diffusion model. You can describe very clearly what you want, and iterate carefully until you get a picture that looks good. But nobody would say that they “drew a nice picture”, since they haven’t done any drawing.

              (except maybe the mega-power-users who use the tool and have a warped view of their accomplishment)

            • bakugo 1 hour ago
              > But that's not programming because it's a natural-language conversation?

              Correct. Programming is writing code. You are not writing code, therefore you are not programming. I don't understand what's so complicated about this.

              • pdntspa 1 hour ago
                I'm literally making a program. Present-progressive of the verb to program. I feel like you're pearl-clutching on semantics. By my read, programming != writing code, but writing code is most definitely programming. Oxford defines 'to program' as both.
        • bakugo 1 hour ago
          Sounds like you just don't like programming. And that's okay! It's okay to not like things.

          But "I love programming now that I don't do any programming" is an utterly nonsensical statement. Please stop and reflect over what you said for a moment.

          • pdntspa 39 minutes ago
            Substitute it with "the mechanical act of writing code" and maybe it will make more sense. I have been clumsy with my vocabulary here, forgive me.
        • beepbooptheory 2 hours ago
          "I really really love cooking. In fact, I have optimized my cooking completely, I go out to restaurants every night!"

          I believe gp and others just like food instead of cooking. Which is fine, but if that's the case, why go around telling everyone you're a cook?

          • pdntspa 33 minutes ago
            "I thought using loops was cheating, so I programmed my own using samples. I thought using samples was cheating, so I recorded real drums. I thought that was cheating, so I learned to play. I thought using purchased drums was cheating, so I made my own. I thought using pre-made skins was cheating, so I killed a goat and skinned it. I thought that was cheating too, so I raised my own goats from birth. I haven't made any music lately, what with the goat farming and all."
          • RcouF1uZ4gsC 1 hour ago
            But are you doing real food preparation unless you are hunting and dressing the animals and foraging for your own food?
            • r-w 1 hour ago
              Yes. You are doing the work yourself rather than instructing someone else on how to do it.
              • sebzim4500 1 hour ago
                You are doing something, but 99% of the work has been done for you. I guess it's like vibe coding and telling the model to fix issues when you see them.
            • beepbooptheory 52 minutes ago
              Ah geeze. I am utterly destroyed by this comment. Will need to sit and think now.
      • wmeredith 2 hours ago
        I think this is a semantics thing. I feel the same way, but I wouldn't say that I feel like I'm good at programming. I'm most certainly not. What I am good at is product design and development, and LLM tech has made it so that I can concentrate on features, business models, and users.
      • throwawaytea 1 hour ago
        I know how to build a house for the most part. But I don't have time to build a house.

        If I get a robot someday and manage it daily before I leave for work to slowly build a house, when it's done, I gotta be honest and admit I'll consider myself a home builder.

        Otherwise, who is a home builder? Very few people do every single part themselves, even if they technically could.

      • MattGaiser 2 hours ago
        Different definitions of programming.

        OP defines it as getting the machine to do as he wants.

        You define it as the actual act of writing the detailed instructions.

        • bluefirebrand 2 hours ago
          It is very difficult to get the machine to do what you want without the detailed instructions

          If you have an LLM generate the instructions, then the LLM is programming, you're just a "prompter" or something. Not a programmer

          • r-w 1 hour ago
            Exactly. There's a probabilistic machine in between you and every instruction that gets executed, without exception. It's straight up different.
      • thendrill 3 hours ago
        I see a lot of people get really confused between the act of writing code vs. programming...

        Programming is willing the machine to do something. Writing code is just that: writing code. Yes, sometimes you write code to make the machine do something, and other times you write code just to write code (for example refactoring, or splitting logic from presentation, etc.)

        Think about it like this... Everyone can write words. But writing words does not make you a book writer.

        What always gets me is that the act of writing code by itself has no real value. Programming is what solves problems and brings value. Everyone can write code, not everyone can "program"....

        • bigstrat2003 3 hours ago
          Programming is writing code. There's nothing to confuse because that's what the word means.
          • simplyluke 2 hours ago
            Is it? I wouldn't consider punch cards writing code but they were certainly programming. Programming is a broader concept than code in a text file.
          • ModernMech 2 hours ago
            They're saying writing code is programming but not all programming is writing code. What is Scratch?
            • r-w 1 hour ago
              A graphical means of writing and manipulating a program.
              • boc 1 hour ago
                Aka Claude Code.
      • poszlem 2 hours ago
        Why do you feel good about programming despite not writing in machine code?
        • bakugo 1 hour ago
          False equivalence. x86 assembly is a programming language, C is a programming language, Javascript is a programming language. English is NOT a programming language.

          If it was, you wouldn't need "AI" to convert English into a real programming language before that, in turn, can be converted to machine code.

          • throwawaytea 1 hour ago
            My boss can make people do countless things in the proper order, with just a few words. Sounds like a programming language to me.
      • orsorna 3 hours ago
        Well for one, programming actually sucks. Punching cards sucks. Copywriting sucks. Why? Well, implementation for the sake of implementation is nothing more than self-gratifying, and sole focus on it is an academic pursuit. The classic debate of which programming language is better is an argument of the best way to translate human ideas of logic into something that works. Sure programming is fun but I don't want to do it. What I do want to do is transform data or information into other kinds of information, and computing is a very, very convenient platform to do so, and programming allows manipulation of a substrate to perform such transformations.

        I agree with OP because the journey itself rarely helps you focus on system architecture, deliverable products and how your downstream consumers use your product. And not just product in the commercial sense, but FOSS stuff or shareware I slap together because I want to share a solution to a problem with other people.

        The gambling fallacy is tiresome to someone who, at least I believe, can question the bullshit models sometimes try to pull. It is very much gambling for CEOs and idea men who do not have a technical floor to question model outputs.

        If LLMs, combined with my human judgement, were /slow/ at getting a working product together, I wouldn't use them.

        So, when I encounter someone who doesn't pin value into building something that performs useful work, only the actual journey of it, regardless of usefulness of said work, I take them as seriously as an old man playing with hobby trains. Not to disparage hobby trains, because model trains are awesome, but they are hubris.

        • munk-a 2 hours ago
          > Well for one, programming actually sucks. Punching cards sucks. Copywriting sucks.

          There's a significant difference between past software advancements and this one. When we previously reduced the manual work when developing software it was empowering the language we were defining our logic within so that each statement from a developer covered more conceptual ground and fewer statements were required to solve our problems. This meant that software was composed of fewer and more significant statements that individually carried more weight.

          The LLM revolution has actually increased code bloat at the level humans are (probably, get to that in a moment) meant to interact with it. It is harder to comprehend code written today than code written in 2019 and that's an extremely dangerous direction to move in.

          To that earlier marker - it may be that we're thinking about code wrong now and that software, as we're meant to read it, exists at the prompt level. Maybe we shouldn't read or test the actual output but instead read and test the prompts used to generate that output - that'd be more in line with previous software advancements and it would present an astounding leap forward in clarity.

          My concern with that line of thinking is that LLMs (at least the ones we're using right now for software dev) are intentionally non-deterministic, so a prompt evaluated multiple times won't resolve to the same output. If we pushed in this direction for deterministic prompt evaluation then I think we could really achieve a new, safe level of programming - but that doesn't seem to be anyone's goal. And if we don't push in that direction, then prompts are a way to efficiently generate large amounts of unmaintained, mysterious and untested software that won't cause problems immediately... but absolutely does cause problems in a year or two when we need to revise the logic.
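
The deterministic-prompt-evaluation idea above can be sketched concretely. This is a hypothetical illustration, not an existing tool: it pins each prompt to the first output it produced, like a lockfile, so re-evaluating a prompt is reproducible. Names such as `PromptLockfile` are invented for the sketch.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical sketch (not an existing tool): pin each prompt to the first
# output it produced, like a lockfile, so re-evaluating a prompt replays the
# recorded output instead of re-sampling a non-deterministic model.
class PromptLockfile:
    def __init__(self, path="prompts.lock.json"):
        self.path = Path(path)
        self.entries = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def evaluate(self, prompt: str, generate) -> str:
        # `generate` is whatever model call you use; it runs at most once
        # per distinct prompt, and the result is persisted.
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in self.entries:
            self.entries[key] = generate(prompt)
            self.path.write_text(json.dumps(self.entries, indent=2))
        return self.entries[key]
```

Under a scheme like this, the prompt rather than its expansion becomes the reviewed artifact: editing a prompt (and only that) triggers regeneration, which is roughly the property the comment asks for.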

        • pton_xd 1 hour ago
          > Well for one, programming actually sucks.

          I'll never understand those in a field who hate the day-to-day details of their job. You're intelligent, why not do something you actually enjoy engaging with?

          Maybe now with the advancement of the field you're finally enjoying yourself, but why were you subjecting yourself to daily misery for so long in the first place? I don't get it.

          • orsorna 1 hour ago
            Well I just explained what I actually enjoy about programming, which is the results of it. Many jobs have intermediate boring steps that build to something satisfying.

            >but why were you subjecting yourself to daily misery for so long in the first place? I don't get it.

            It just meant it took a lot longer to build something, to get that satisfaction.

        • bluefirebrand 2 hours ago
          > Well for one, programming actually sucks

          Speak for yourself. Programming is awesome. I love it so much and I hate that AI is taking a huge steaming dump on it

          > So, when I encounter someone who doesn't pin value into building something that performs useful work, only the actual journey of it, regardless of usefulness of said work, I take them as seriously as an old man playing with hobby trains

          Growing and building rapidly at all costs is the behavior of a cancer cell, not a human

          I love model trains

          • orsorna 1 hour ago
            Your cancer cell analogy is moot unless you paint all AI-generated applications as unusable trash, which is not the case, and not how I'd describe my own work with it. It's true that standards have dropped to the floor, where anyone can "ship" something, but that doesn't mean it's good. I think I have a better handle on how to steer GenAI than the average linkedinbro. But the divide between journey and destination is valid; I guess it's something that hasn't been explored until GenAI.
    • manmal 2 hours ago
      I've felt this exact same way until very recently. But in the end, it's slop that never quite does what it's supposed to. Anthropic is proud of themselves that they brute-forced the world's crappiest C compiler into existence. Guess what, nobody will use it.
  • watzon 3 hours ago
    I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
    • yoyohello13 3 hours ago
      I think the addiction angle seems to make AI coding more similar to gambling. Some people seem to be disturbingly addicted to agentic coding. Much more so than traditional programming. To the point of doing destructive things like waking up in the middle of the night to check agents. Or giving an agent access to their bank account.
      • shepherdjerred 2 hours ago
        I mean, it’s just so fun. Claude wrote a native macOS app for me today.

        I don’t think I’d describe my behavior as destructive though

      • deadbabe 2 hours ago
        I know at least one case where the obsession with agents ruined a marriage.
    • m00x 3 hours ago
      AI coding is gambling on slot machines, managing developers is betting on race horses.
      • SkyPuncher 3 hours ago
        Only if your AI coding approach is the slot machine approach.

        I've ended up with a process that produces very, very high quality outputs, often needing little to no correction from me.

        I think of it like an Age of Empires map. If you go into battle surrounded by undiscovered parts of the map, you're in for a rude surprise. Winning a battle means having clarity on both the battle itself and risks next to the battle.

        • murkt 2 hours ago
          Good analogy! Would be interesting to read more details about how you’re getting very high quality outputs
        • Obscurity4340 2 hours ago
          Would you mind sharing some of your findings?
        • input_sh 2 hours ago
          Until it produces predictable output, it's gambling. But it can't produce predictable output because it's a non-deterministic tool.

          What you're describing is increasing your odds while gambling, not that it's not gambling. Card counting also increases your odds while gambling, but it doesn't make it not gambling.

          • IanCal 2 hours ago
            This is a pretty wild comparison in my opinion; it counts almost everything as gambling, which means it has almost no use as a definition.

            The most obvious issue is it’d class working with humans as gambling. Fine if you want to make that as your definition but it seems unhelpful to the discussion.

            • input_sh 29 minutes ago
              You seem to have a fundamental issue understanding what the term deterministic even means.

              If you give the same trivial task to the same human five times in a row, let's say wash the dishes, your dishes are either gonna be equally clean or equally not clean enough every time. Hell, it might even get better over time by giving them feedback at the end of the task that it can learn from.

              If you run the same script five times in a row, even while changing some input variables, you're gonna get the same predictable behavior - something you can understand, look at the code, and fix.

              If you ask the same question to the same LLM model five times in a row, are you getting the same result every time? Is it kind of random? Can the quality be vastly different if you reject all of its changes, start a new conversation, and tell it to do the same thing again using the exact same prompt? Congrats, that's gambling. It's no different than spinning a slot machine in a sense that you pass it an input and hope for the best as the output. It is different than a slot machine in a sense that you can influence those odds by asking "better", but that does not make it not gambling.
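
A toy sketch of why repeated identical prompts diverge: decoding samples from a probability distribution, and any temperature above 0 injects randomness that compounds token by token. The `logits` dict below is a stand-in for a model's next-token scores, not a real model.

```python
import math
import random

# Toy model of sampling-based decoding. `logits` stands in for a model's
# next-token scores; a real LLM repeats this choice for every token, so
# small per-token randomness compounds across a whole answer.
def sample_token(logits, temperature, rng):
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, fully repeatable.
        return max(logits, key=logits.get)
    # Softmax with temperature, then one weighted random draw.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numeric edge case: fall back to the last token
```

At temperature 0 five runs agree exactly; at temperature 1 they drift, which is the "spin the wheel" feel being debated. (Agent stacks rarely run greedy, and even greedy decoding can vary across hardware, so real determinism is harder than this sketch suggests.)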

            • RhythmFox 1 hour ago
              How does it 'count almost everything as gambling'? They just said 'non-deterministic' output is gambling-like; that is not 'almost everything'. Most computation that you use on a day-to-day basis (depending on how much you use AI now, I suppose) is in all ways deterministic. Using probabilistic algorithms is not new, but your point is not clicking...
              • organsnyder 1 hour ago
                Working with humans is decidedly not deterministic, though. And the discussion here is comparing AI coding agents and humans.
                • RhythmFox 1 hour ago
                  That starts to get into a very philosophical space talking about human action as deterministic or not. I think keeping to the fact that the artifacts (ie code) we are working off will have deterministic effects (unless we want it not to) is exactly the point. That is what lets chaotic human brains communicate with machines at all. Adding more chaos to the system doesn't strike me as obviously an improvement.
          • darkhorse222 2 hours ago
            Similar to quantum computing, a probabilistic model when condensed to sufficiently narrow ranges can be treated as discrete.
      • bazmattaz 3 hours ago
        Damn, this is so accurate. As a project manager turned product manager, this is so true. You need to estimate a project based on the "pedigree" of your engineers.
      • munk-a 2 hours ago
        Would it make us uncomfortable to reword the above example to

        > AI coding is gambling on slot machines, managing developers is gambling on the stock market.

        Because I feel like that is a much more apt analogy.

      • cko 2 hours ago
        What is it with you guys and stallions?
        • deadbabe 2 hours ago
          There is a long history of managers just wanting to work their developers like horses.
      • edu 2 hours ago
        Great analogy, I’m saving it!
    • MeetingsBrowser 3 hours ago
      You (in theory) have more control over the quality of the team you are managing, than the quality of the models you are using.

      And the quality of code models puts out is, in general, well below the average output of a professional developer.

      It is however much faster, which makes the gambling loop feel better. Buying and holding a stock for a few months doesn't feel the same as playing a slot machine.

      • PaulHoule 3 hours ago
        One difference is those developers are moral subjects who feel bad if they screw up whereas a computer is not a moral subject and can never be held accountable.

        https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...

        • ponector 2 hours ago
          Right, you need to hire a scapegoat. Usually the tester has that role: little impact but huge responsibility for quality.
      • est31 3 hours ago
        You have a lot of control over LLM quality. There are different models available, and even with different effort settings of those models you get different outcomes.

        E.g. look at the "SWE-Bench Pro (public)" heading in this page: https://openai.com/index/introducing-gpt-5-4/ , showing reasoning efforts from none to high.

        Of course, they don't learn like humans so you can't do the trick of hiring someone less senior but with great potential and then mentor them. Instead it's more of an up front price you have to pay. The top models at the highest settings obviously form a ceiling though.

        • kraemahz 2 hours ago
          You also have control over the workflow they follow and the standards you expect them to stick to, through multiple layers of context. Expecting a model to understand your workflow and standards without doing the effort of writing them down is like expecting a new hire to know them without any onboarding. Allowing bad AI code into your production pipeline is a skill issue.
        • MeetingsBrowser 30 minutes ago
          Imagine you opened a job posting and had all applicants complete SWE-bench.

          Ignoring the useless/unqualified candidates and models, human applicants have a much wider range of talent for you to choose from than the top models + tooling.

          The frontier models + tooling are, in the grand scheme of things, basically equivalent at any given moment.

          Humans can be just as bad as the worst models, but models are nowhere near as good as the best humans.

      • tossandthrow 2 hours ago
        What theory is that?

        My experience is the absolute opposite. I am much more in control of quality with Ai agents.

        I am never letting junior to midlevels into my team again.

        In fact, I am not sure I will allow any form of manual programming in a year or so.

        • MeetingsBrowser 2 hours ago
          > I am never letting junior to midlevels into my team again

          Exactly. You control the quality of the people in your team. You can train, fire, hire, etc until you get the skill level you want.

          You have effectively no control over the quality of the output from an LLM. You get what the frontier labs give you and must work with that.

          • tossandthrow 1 hour ago
            That is not correct.

            It is much easier to control quality of an Ai than of inexperienced developers.

            • MeetingsBrowser 36 minutes ago
              I think we are talking past each other.

              > I am never letting junior to midlevels into my team again

              My point is, you control the experience level of the engineers on your team. The fact that you can say you won't let junior or midlevels on your team proves that.

              You do not have that level of control with LLMs. Anthropic and OpenAI are roughly the same quality at any given time. The rest are not useful.

        • DrJokepu 2 hours ago
          Eh. You want a good mix of experience levels; what really matters is that everyone is talented. Less experienced colleagues are unburdened by yesterday's lessons that may no longer be relevant today; they don't have the same blind spots.

          Also, our profession is doomed if we won’t give less experienced colleagues a chance to shine.

          • tossandthrow 1 hour ago
            Our profession is likely doomed not because we don't train people, but by the lack of demand
    • ChiefTinkeer 2 hours ago
      I think this is a very good point. We have a natural bias toward human output, as there is an illusion of full control - in reality, even just from a solo dev perspective, you've still got a load of hidden illogical persuasions influencing your code and how you approach a problem. AI has its own biases that come out of the nature of its training on large, unknowable data sets, but I'd argue the 'black box' thinking that comes out of that isn't too different from the black box of the human mind. That's not at all to say that AI isn't worse (even if quicker) than top developer talent writing handwritten code today - just that the barrier to getting that level of quality isn't as insurmountable as it might appear.
    • Spooky23 1 hour ago
      It absolutely is. I did some consulting work for an environment where they had to churn out code to meet certain unchanging schedules; usually you can dumb down the process to make it more deterministic.

      These guys had to manage a very complex calculation engine that, let's just say, changed every year. It had to be correct and had to be delivered by a certain date every year.

      They had an army (100-200 people, depending on various factors) of marginally skilled coding drones that were able to turn out the Java, COBOL or whatever it was, predictably, on that schedule, without necessarily understanding any of the big picture or having any hope of doing so. Basically a software factory. There were about a dozen people who actually understood everything.

    • nkrisc 1 hour ago
      Only if you consider generative AI and human beings to be effectively equivalent.

      Being a project manager is more or less something humans have been doing since the dawn of time.

      Generative AI takes money as input and gives some output. If you don’t like the output, more money goes in. It’s far more akin to gambling than organizing human labor.

    • QuantumGood 3 hours ago
      Framing anything with a common blanket concept usually fails to apply the same framing to related areas. A lot of things include some gambling; you need to compare how it was also 'gambling' before, how 'not using AI' is also 'gambling', etc.

      As @m00x points out "coding is gambling on slot machines, managing developers is betting on race horses."

    • krupan 2 hours ago
      I asked an AI to play hangman with me and looked at its reasoning. It didn't just pick a secret word and play a straightforward game of hangman. It continually adjusted the secret word based on the letters I guessed, providing me the "perfect" game of hangman. Not too many of my guesses were "right" and not too many "wrong", and after a little struggle and almost losing, I won in the end.

      It wasn't a real game of hangman, it was flat-out manipulation, engagement farming. Do you think it's possible that AI does that in any other situations?

      • lcampbell 1 hour ago
        The reasoning generally isn't kept in the context, so after choosing the secret word in the first reasoning block, the LLM will have completely forgotten it in the second and subsequent requests.

        So, it technically didn't change the secret word so much as it was trying to infer what its own secret word might have been, based on your guesses.

        • mh- 7 minutes ago
          Exactly. The following will work, assuming you're using a model and frontend that supports it:

          > Let's play hangman. Just pick a 3 letter word for now, I want to make sure this works. Pick the secret word up front and make sure to write the secret word and game state in a file that you'll have access to for the rest of the session, since you won't remember what word you chose otherwise.

          This was Opus 4.6 in Claude desktop, fwiw.

          Note: I didn't bother experimenting with whether it worked without me explicitly telling it that it should record the game state to a file.
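
The state-file workaround described above can be sketched in plain code: because the secret must outlive any single turn, it is committed to durable storage before the first guess, so it cannot be retrofitted to the player's guesses. The file name and helpers here are illustrative, not what the model actually writes.

```python
import json
from pathlib import Path

# Sketch of the state-file workaround: the "secret" lives outside the
# conversation, fixed before the first guess. Names are illustrative.
STATE = Path("hangman_state.json")

def start_game(secret: str) -> None:
    # Commit the secret up front; later turns can only read it back.
    STATE.write_text(json.dumps({"secret": secret, "guessed": []}))

def guess(letter: str) -> str:
    state = json.loads(STATE.read_text())
    state["guessed"].append(letter)
    STATE.write_text(json.dumps(state))
    # Reveal only guessed letters; the secret can't drift between turns.
    return " ".join(c if c in state["guessed"] else "_" for c in state["secret"])
```

The point is the same one lcampbell makes about reasoning not persisting: any "memory" the game depends on has to live in the context or on disk, not in a discarded reasoning block.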

    • runarberg 3 hours ago
      I don‘t think so. A project manager can give feedback, train their staff, etc. An AI coding model is all you get, and you have to wait until your provider trains a new model before you might see an improvement.
    • ModernMech 2 hours ago
      That says more about how you see developers than whether or not managers are in a sense gamblers.
    • ares623 3 hours ago
      This must be it. So many of our colleagues have been burnt by bad coworkers that they would rather burn everything down than spend another day working with them.
    • rvz 3 hours ago
      > AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.

      Except one of them (humans) can explain themselves, and their actions can be held to account in the case of any legal issue, whereas an AI cannot; making such an entity completely unsuitable for high-risk situations.

      This typical AI booster comparison has got to stop.

      • tossandthrow 2 hours ago
        Love that you needed to make it clear that it is humans that can explain themselves..

        Employees can only be held accountable with severe malice.

        There is a good chance that the person actually responsible (eg. The ceo or someone delegated to be responsible) will soon prefer to have AIs do the work as their quality can be quantified.

      • thunky 2 hours ago
        > Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot

        You "own" the software it creates which means you're responsible for it. If you use AI to commit crimes you'll go to jail, not the AI.

    • underlipton 3 hours ago
      As a human, you generally have the opportunity to make decent headway in understanding the other humans you're working with, and in adjusting your instructions to better anticipate the outputs they'll return to you. This is almost impossible with AI because of a combination of several factors:

      >You are not an AI and do not know how an AI "thinks".

      >Even if you come to be able to anticipate an AI's output, you will be undermined by the constant and uncontrollable update schedule imposed on you by AI platforms. Humans only make drastic changes like this under uncommon circumstances, like when they're going through large changes in their life, not as a matter of course.

      >However, without this update schedule, problems that were once intractable will likely stay so forever. Humans, on the other hand, can grow without becoming completely unpredictable.

      It's a Catch-22. AI is way closer to gambling.

  • FL4TLiN3 2 hours ago
    In my corner of the world, average software developers at Tokyo companies, not that many people are actually using Claude Code for their day-to-day work yet. Their employers have rolled it out and actively encourage adoption, but nobody wants to change how they work.

    This probably won't surprise anyone familiar with Japanese corporate culture: external pressure to boost productivity just doesn't land the same way here. People nod, and then keep doing what they've always done.

    It's a strange scene to witness, but honestly, I'm grateful for it. I've also been watching plenty of developers elsewhere get their spirits genuinely crushed by coding agents, burning out chasing the slot machine the author describes. So for now, I'm thankful I still get to see this pastoral little landscape where people just... write their own code.

    • cyanydeez 1 hour ago
      Yeah, I hear all the podcasters describing how they're building things; and when it comes to something like translating a package from one language to another, and that package has tests, it's a meandering but fruitful adventure. Because, of course, when you have sane tests you can always loop through. I did this with a package dependency I had, and it got 90% of the way there, but then one of the tests I expanded upon just refused. It ended up seeming like it was just a failed dependency.

      But where there are no tests, and you're the one defining whats correct, you're definitely encroaching on the slot machine hoping it'll spit out something faster than you could do it yourself.

      Then there's some vague unease about whether spending the time to prompt will actually result in properly integrated software in a large existing code base with idiosyncratic code use.

      Overall, I don't see the ROI business gets from forcing people into these tools; however, as an individual, it's definitely worth understanding what they can do. Mostly, I find the efficient copy/paste/find/replace of existing code to be very good.

  • Terr_ 3 hours ago
    I'd emphasize that prompting LLMs to generate code isn't just metaphorical gambling in the sense of "taking a risk", the scary part is the more-literal gambling involving addictive behaviors and how those affect the way the user interacts with the machine and the world.

    Heck, this technology also offers a parasocial relationship at the same time! Plopping tokens into a slot-machine which also projects a holographic "best friend" that gives you "encouragement" would fit fine in any cyberpunk dystopia.

    • RhythmFox 1 hour ago
      Having used agents some I think 'addictive behavior' is really the closest thing to the feeling it gives me as well. I don't find it engaging my critical thinking brain, and in fact it often subverts that in favor of 'get the next dopamine hit faster' behavior (ie just rerun it, leading to the metaphor the OP is using). It takes a conscious effort for me to get back out of that cycle and start thinking of the fine details of what the code really does, or why I wanted it to do that in the first place. I have called it 'smoking vibes' and 'chasing rAInbows' in my sillier moments. It really does feel good... too good :P
    • interestpiqued 2 hours ago
      I think AI literally makes even being wrong feel like getting something done. And that is the addictive part for people.
      • rsoto2 2 hours ago
        Look at all this text I have! It can't be worthless right?!
      • cyanydeez 1 hour ago
        "Near-Miss" effect: https://harprehab.com/blogs/the-psychology-of-risk-why-gambl...

        I believe that's the strongest pattern in LLM gambling. I was listening to Syntax and they described how "Even though the LLM did it wrong 4 times, that 5th time could be right, so why not just go!"; paraphrased, of course.

        It also explains the meta-LLM business, where all these CEO types put in some question and, because the LLM just knows all these words, they believe it's valuable because it's "almost" correct, even when that last correction might be forever elusive, because these machines aren't thinking; they're patterning a highly regularized language beneath looser descriptions.

        There'll definitely be a winner in the AI bubble, but it'll be seen after it pops.

      • Terr_ 2 hours ago
        [dead]
  • copypaper 3 hours ago
    You got to know when to Ship it,

    Know when to Re-prompt,

    Know when to Clear the Context,

    And know when to RLHF.

    You never trust the Output,

    When you’re staring at the diff view,

    There’ll (not) be time enough for Fixing,

    When the Tokens are all spent.

    • koolba 3 hours ago
      > When you’re staring at the diff view,

        Bold assumption that people are looking at the diffs at all. They leave that for their coworkers' agents.

      • zephen 2 hours ago
        Will the diffs be small enough for people to even usefully wade through them?
    • sedawkgrep 3 hours ago
      You're a gamblin' man, I see...
      • niccl 2 hours ago
        thank you. I knew there was something I was missing
    • krupan 2 hours ago
      I really hope that was your creativity and not AI
  • dzink 3 hours ago
    It’s variable rewards and even with large models the same question can lead to dramatically different answers. Possibly because they route your request through different models. Possibly because the model has more time to dig through the problem. Nonetheless we have some illusion of control over the output (you we wouldn’t be playing it) but it is just the quality of the model itself that leads to better outcomes - not your input. If you can’t let go of the feeling thought, it’s definitely addictive. And as I look back, it’s a fast iteration on the building cycle we had before AI. But the brain really likes low latency - it is addicted to the fast reward for its actions. So AI, if it gets fast enough (sub 400ms) it will likely become irreversibly addictive to humans in general, as the brain will see is at part of itself. Hope it has our interest at heart by then.
    • markhahn 2 hours ago
      This (variable rewards -> gambling, illusion of control) is really important.

      I'm not an expert in the psych/neuro literature on addiction, but I suspect latency isn't that critical. Or is that just because it's things like fruit machines that have been studied? Gambling (poker, racehorses) is quite long-latency. OTOH, scrolling is closer to 400ms, and that's certainly the modern addiction...

    • krupan 2 hours ago
      Well said! My only qualm with this is saying you hope "it" has our interests at heart. "It" is a machine made by humans that work for corporations. I would correct your hope to, "I hope they have our interest at heart by then."
    • zephen 2 hours ago
      This is being overlooked, downplayed, or simply not understood, by many commenters.

      It is exactly like the proverbial monkey or rat pressing a bar for a food pellet to come out.

      If the pellet unerringly drops, and is always tasty and nutritious, the rat stops when it's no longer hungry.

      Otherwise, an inordinate amount of time is spent pressing the bar.

  • minimaxir 4 hours ago
    The gambling metaphor often applied to vibecoding implies that the outcome cannot be controlled or influenced, as with a slot machine. Opus 4.5 and beyond show that it not only can very much be influenced, but also that it can give better results more consistently with the proper checks and balances.
    • Retr0id 4 hours ago
      Poker is a skill-based game where your actions influence your success, but many people who play it are gambling.
      • bensyverson 3 hours ago
        And that's why poker is a poor metaphor for agentic coding.
        • deepfriedrice 3 hours ago
          It's the perfect metaphor? Playing correctly/optimally is +EV. But nobody starts there, and many people don't ever get there.

          The main difference is that you're exploiting your own weaknesses, rather than others'. Limitations in typing speed, information gathering, pattern recognition.

          • bensyverson 3 hours ago
            In that case, couldn't you substitute painting, horseback riding or knitting? Nothing about poker has anything to do with agentic coding except "it's something you can learn."
            • deepfriedrice 2 hours ago
              In poker some people are gambling. Some may be self-aware, but many aren't and misunderstand why they win or lose. Poker inconsistently and unreliably rewards gambling, much like vibe-coding.
      • bigstrat2003 2 hours ago
        Poker has elements of both luck and skill. The luck element + wagering money is what makes it gambling.
        • Retr0id 2 hours ago
          On a long enough timeframe, the luck averages out.
      • c_e 3 hours ago
        everybody who's playing poker is gambling, skilled or not.
        • throwmeaway820 3 hours ago
          without a rigorous definition of "gambling", such discussions are pointless
    • jatins 3 hours ago
      Yeah, I don't think the metaphor applies exactly but I definitely see similarities from my personal experience

      1/ Dependency -- Once I got used to agentic coding, I almost always reached out to it even for small changes (e.g. update a yaml config)

      2/ Addiction -- In the initial euphoria phase, many people experience not wanting to "waste" any time agent idle and they'd try to assign AI agents task before they go to sleep

      3/ You trust your judgement less and less as agent takes over your code

      4/ "Slot machine" behavior -- running multiple AI agents parallel on same task in hope of getting some valuable insight from either

      5/ Psychosis -- We have all met crypto traders who'd tell you how your 9-5 is stupid and you could be making so much trading NFTs. Social media is full of similar anecdotes these days in regards to vibecoding, with people boasting their Claude spend, LOC and what not

    • davidkhess 3 hours ago
      One way it works is if you think of cognitive debt as the "house". As in "the house always wins".
    • ambicapter 3 hours ago
      Slot machines have very controlled results. They are regulated to a high precision of reliability.
      • Terr_ 3 hours ago
        I don't think that difference matters to the comparison.

        It's not an inherent feature of slot machines; it's something we enforce because people got angry about the outcomes (i.e. fraud) when they didn't operate that way.

        It doesn't matter because a dodgy slot-machine is still a slot machine, and the person using it would still be a gambler.

    • Terr_ 3 hours ago
      > The gambling metaphor often applied to vibecoding implies that the outcome

      The important part of the not-really-a-metaphor is the relationship between user and machine, and how it affects the user's mind.

      What the machine outputs on "wins" doesn't matter as much, addictive gambling can still happen even when the payouts are dumb.

    • reaperducer 3 hours ago
      > it can give better results more consistently with the proper checks and balances.

      You can get more consistent results from a slot machine with a bunch of magnets and some swift kicks. It's still gambling.

  • thisisbrians 3 hours ago
    It is and will always be about: 1) properly defining the spec 2) ensuring the implementation satisfies said spec
    • nickjj 3 hours ago
      > properly defining the spec

      Why do you often need to re-prompt things like "can you simplify this and make it more human readable without sacrificing performance?". No amount of specification addresses this on the first shot unless you already know the exact implementation details in which case you might as well write it yourself directly.

      I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit.

      I sometimes use AI for tiny standalone functions or scripts so we're not talking about a lot of deeply nested complexity here.

      • seanmcdirmid 3 hours ago
        > I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit.

        Are you stuck entering your prompts manually, or do you have it set up as a feedback loop like "beautify -> check beauty -> if not beautiful enough, beautify again"? I can't imagine why everyone thinks AIs can just one-shot everything like correctness, optimization, and readability; humans can't one-shot these either.

        • nickjj 3 hours ago
          I do everything manually. Prompt, look at the code, see if it works (copy / paste) and if it works but it's written poorly I'll re-prompt to make the code more readable, often ending with me making it more readable without extra prompts. Btw, this isn't about code formatting or linting. It's about how the logic is written.

          > I can't imagine why everyone things AIs can just one shot everything like correctness, optimization, and readability, humans can't one shot these either.

          If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning. If not, we're admitting it's providing an initial worse result for unknown reasons. Maybe it's to make you as the operator feel more important (yay I'm providing feedback), or maybe it's to extract the most amount of money it can since each prompt evaluates back to a dollar amount. With the amount of data they have I'm sure they can assess just how many times folks will pay for the "make it better" loop.

          • seanmcdirmid 3 hours ago
            Why do you orchestrate the AI manually? You could write a BUILD file that just does it in a loop a few times, or, if you lack build-system integration, write a Python script.

            > If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning.

            This is the wrong way to think about AI (at least with our current tech). If you give AI a general task, it won't focus its attention at any of these aspects in particular. But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.

            People who feel like AI should just do the right thing already without further prompting or attention focus are just going to be frustrated.

            > Btw, this isn't about code formatting or linting. It's about how the logic is written.

            Yes, but you still aren't focusing the AI's attention on the problem. You can also write a guide that it puts into context for things you notice that it consistently does wrong. But I would make it a separate pass, get the code to be correct first, and then go through readability refactors (while keeping the code still passing its tests).
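            A minimal sketch of the feedback loop described above. `call_model()` is a hypothetical stand-in for whatever LLM API you actually use; it's stubbed here so the control flow itself is runnable, not a real integration:

```python
def call_model(prompt: str) -> str:
    # Hypothetical LLM call; stubbed so the loop is demonstrable.
    # A real version would send `prompt` plus a "make this more
    # readable" instruction to your model of choice.
    return prompt.replace("ugly", "readable")

def beautify(code: str, max_passes: int = 5) -> str:
    """Repeatedly ask the model to improve readability until it converges."""
    for _ in range(max_passes):
        improved = call_model(code)
        if improved == code:  # fixed point: the model suggests no further changes
            break
        code = improved
    return code

print(beautify("ugly code"))  # -> "readable code"
```

            The loop stops at a fixed point: once a pass produces no further changes, there is nothing left to re-prompt for, which is the "check beauty" step done mechanically instead of by eyeballing each round.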

      • giancarlostoro 3 hours ago
        There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to prompt not merely "what" you want, but HOW you want it done (you can get insanely detailed or stay just vague enough); in some cases the WHY is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you know. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it.

        I have one-shot prompted projects from an empty folder to a full-featured web app with accounts, login, profiles, you name it. Insanely stable, maybe an oops here or there, but for a non-spec, single-prompt shot, that's impressive.

        When I don't use a tool to handle the task management I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify technology you want to use, design patterns.

    • QuadrupleA 2 hours ago
      Side note, everyone's talking about having AI agents "conform to the spec" these days. Am I in my own bubble, or who the hell these days gets The Spec as a well-formed document? Let alone a good document: something that can be formally verified, thoroughly test-cased, that can christen the software "complete" when all its boxes are ticked, etc.?

      This seems like 1980's corporate waterfall thinking, doesn't jibe with the messy reality I've seen with customers, unclear ideas, changing market and technical environments, the need for iteration and experimentation, mid-course correction, etc.

      • Aurornis 1 hour ago
        > who the hell these days gets The Spec as a well-formed document?

        The PMs asked ChatGPT to write a well-formed spec.

        Sadly, true in too many companies right now.

        I do agree with your general point that The Spec can become a crutch for washing your hands of any responsibility for knowing the product, the goals, the company's business, and other contexts. I like to defuse these ideas by reminding the engineers that The Spec is a living document and they are partially responsible for it, too. Once everyone learns that The Spec isn't a crutch for shifting all blame to the product manager, they become more involved in making sure it's right.

    • krupan 2 hours ago
      Good sir, have you heard the Good Word of the Waterfall development process? It sounds like that's what you are describing
    • bwestergard 3 hours ago
      That can't be the whole story, right? Because there are an arbitrarily large number of (e.g.) Rust programs that will implement any given spec given in terms of unit tests, types, and perhaps some performance benchmarks.

      But even accounting for all these "hard" constraints and metrics, there are clearly reasons to prefer some possible programs over others even when they all satisfy the same constraints and perform equally on all relevant metrics.

      We do treat programs as efficient causes[1] of side effects in computing systems: a file is written, a block of memory is updated, etc. and the program is the cause of this.

      But we also treat them as statements of a theory of the problem being solved[2]. And this latter treatment is often more important socially and economically. It is irrational to be indifferent to the theory of the problem the program expresses.

      [1]: https://en.wikipedia.org/wiki/Four_causes#Efficient

      [2]: https://pages.cs.wisc.edu/~remzi/Naur.pdf

      • MeetingsBrowser 3 hours ago
        > there are clearly reasons to prefer some possible programs over others even when they all satisfy the same constraints

        Maintainability is a big one missing from the current LLM/agentic workflow.

        When business needs change, you need to be able to add on to the existing program.

        We create feedback loops via tests to ensure programs behave according to the spec, but little to nothing in the way of code quality or maintainability.
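        One way to put a maintainability signal into that feedback loop alongside tests, sketched with Python's stdlib `ast` module. The nesting-depth gate and the threshold are illustrative assumptions, not an established practice:

```python
import ast

# Hypothetical maintainability gate: flag functions whose control-flow
# nesting exceeds a threshold, giving an agent loop a quality signal
# beyond "the tests pass".

def max_depth(node: ast.AST, depth: int = 0) -> int:
    # Depth increases for each nested control-flow statement.
    nested = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    best = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, nested) else 0
        best = max(best, max_depth(child, depth + extra))
    return best

def too_deep(source: str, limit: int = 3) -> list[str]:
    """Return names of functions nested deeper than `limit`."""
    tree = ast.parse(source)
    return [
        f.name
        for f in ast.walk(tree)
        if isinstance(f, ast.FunctionDef) and max_depth(f) > limit
    ]

code = """
def ok(x):
    if x:
        return 1

def gnarly(xs):
    for x in xs:
        if x:
            while x:
                with open('f') as fh:
                    x -= 1
"""
print(too_deep(code))  # -> ['gnarly']
```

        An agent loop could treat a non-empty result the same way it treats a failing test: re-prompt with "flatten these functions" until the check passes.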

    • rawgabbit 3 hours ago
      I had a CIO tell me 15 years ago with Agile I was wasting my time with specs and design documents.
      • vidarh 3 hours ago
        I was in a call just today where specs were presented as a new thing.
    • raizer88 3 hours ago
      AI: "Yes, the specs are perfectly clear and architectural standards are fully respected."

      [Imports the completely fabricated library docker_quantum_telepathy.js and calls the resolve_all_bugs_and_make_coffee() method, magically compiling the code on an unplugged Raspberry Pi]

      AI: "Done! The production deployment was successful, zero errors in the logs, and the app works flawlessly on the first try!"

    • ambicapter 3 hours ago
      Then pulling the lever until it works! You can also code up a little helper to continuously pull the lever until it works!
      • SV_BubbleTime 3 hours ago
        We have a monkeys and typewriters thing for this already.

        Just instead of hitting keys, they’re hitting words, and the words have probability links to each other.

        Who the hell thinks this is ready to make important decisions?

    • dgxyz 3 hours ago
      Well it’s more how much we care about those.

      Which with the advent of LLMs just lowered our standards so we can claim success.

    • CodingJeebus 3 hours ago
      Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly to the point that it will keep me up all night wanting to push further and further.

      That's where the gambling metaphor really resonates. It's not whether or not the output is correct; I've been building software for many years and I know how to direct LLMs pretty well at this point. But I'm also an alcoholic in recovery and I know that my brain is wired differently than most. And using LLMs has tested my ability to self-regulate in ways that I haven't dealt with since I deleted social media years ago.

      • natpalmer1776 3 hours ago
        It also doesn’t help that producing features is also wired to a sense of monetary compensation. More-so if you’re building a product to sell that might finally be your ticket to whatever your perception of socio-economic victory is.
        • CodingJeebus 3 hours ago
          That's definitely part of it, sure. I also just get a cosmic kick out thinking about the possibilities that this technology unlocks and that thinking can spiral in all sorts of unhealthy ways.
      • acedTrex 3 hours ago
        > Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly

        I don't think I've read a sentence on this website I can relate to less.

        I watch the LLM build things and feel completely numb; I may as well be watching paint dry. It means nothing to me.

        • zer00eyz 3 hours ago
          I wonder if the difference here is age/experience or what you're working on/in.

          When I was 20, writing code was interesting, by the time I was 28 it became "solving the problem" and then moved on to "I only really enjoy a good disaster to clean up".

          All of my time has been spent solving other peoples problems, so I was never invested in the domain that much.

          • MrScruff 2 hours ago
            Yeah, I used to enjoy writing code but after a while I realised I actually more enjoy creating tools that I (and other people) liked to use. Now I can do that really quickly even with my very limited free time, at a higher level of abstraction, but it's still me designing the tool.

            And despite the amount of people telling me the code is probably awful, the tools work great and I'm happily using them without worrying about the code anymore than I worry about the assembly generated by a compiler.

        • CodingJeebus 3 hours ago
          Trust me, I have many days where I wish I had your relationship to this. I wish it were as boring as watching paint dry. But it triggers that part of my brain that wants more, and I have to be very careful about that.
    • BurningFrog 3 hours ago
      That was always the easy part.

      The endless next steps of "and add this feature" or "this part needs to work differently" or "this seems like a bug?" or "we must speed up this part!" is where 98% of the effort always was.

      Is it different with AI coding?

  • comboy 3 hours ago
    Fascinating how HN is torn about vibe coding still. Everybody pretty much agrees that it works for some use cases, yet there is a flamewar (I mean, cultured, HN-type one) every time. People seem to be more comfortable in a binary mindset.
    • hext 1 hour ago
      If you enjoy the flamewar, check out /r/SelfHosted, which has been losing its mind over the last few months. The heavy, heavy majority of that community is somehow incredibly anti-AI, despite the fact that the previous "spammy" posts (before AI-assisted projects) were all "what is wrong with my docker compose file"??
      • ApolloFortyNine 1 hour ago
        I had to unsub from that subreddit when I saw a cool new application and the top comments were just dogging it for the signs of Claude Code (claude.md).

        This is a subreddit about selfhosting things others built for free. Honestly, often for piracy purposes. It's insane how entitled people have become.

        • hext 1 hour ago
          Absolutely. Really gross to see. Heavy majority of the complaints boil down to “I can’t blindly trust everything posted here now?” - as if they could before?? So entitled.

          Also annoys me that all of the suggestions on how to handle filtering AI demonstrate a clear lack of understanding around how agentic coding works. Like if you can’t be bothered to understand why “ban any project that uses AI” is not possible, the entire subreddit is probably above your pay grade…

    • pgwhalen 3 hours ago
      It’s just how discussion on the internet works, for basically anything at all worth discussing. It’s exhausting, but I can hardly blame HN specifically.
    • minimaxir 3 hours ago
      > Everybody pretty much agrees that it works for some use cases

      That isn't true, which is the exact reason why people have a binary mindset. More than once on Hacker News I've had people accuse me of being an AI booster just because I said I had success with agents and they did not.

    • mpalmer 1 hour ago
      For my part at least, I get the most riled up against the binary thinkers!
      • szatkus 1 hour ago
        This. A lot of people on HN act as if you can only write code manually (almost; generators and snippets are allowed, because we are used to them) or vibe-code the whole project through a WhatsApp conversation. As if there were nothing in between, and the same approach should work for all kinds of projects.

        Personally I use coding agents for boring parts (I really don't enjoy putting the same piece of string into 20 different classes just to register a new component) and they work quite well. I'm going to use them for the foreseeable future, because they make coding much more enjoyable for me. On the other hand, I don't have an OpenClaw box burning billions of tokens weekly for me, because I usually don't have ideas that could be clearly specified.

    • zer00eyz 3 hours ago
      VIM vs Emacs vs IDE vs..., Tabs vs Spaces, Procedural vs OOP vs Functional.

      We love a good holy war for sure.

      The nuance is lost, and the conversations we should be having never happen (requirements, hiring/skills, developer experience).

  • selixe_ 1 hour ago
    I think "gambling" is a bit too strong, but there is a real shift in how we evaluate correctness. With traditional coding, you reason step by step; with AI-assisted code, you're often validating outputs after the fact.

    The risk isn't randomness per se; it's over-trusting something that looks correct. The skill ceiling is moving from "can you write it" to "can you reliably verify it".

    • lll-o-lll 21 minutes ago
      But “reliably verify it” was always the critical difference between high and low quality engineering efforts.

      Good programmers might have made things that “performed well”, and had “few bugs”, without this step, but it was not robust to changes over time. If we end up in a place where every project has solid automated verification, perhaps things get better overall.

  • some_random 4 hours ago
    How often do you have to win before it's no longer gambling?
    • operatingthetan 3 hours ago
      Exactly. It's not gambling if you win most of the time. This is like saying driving a car is gambling. I mean sure, I guess if you think any amount of risk equals gambling.
    • Retr0id 4 hours ago
      I don't know where I'd draw the line personally, but wherever you draw it there's a problem. If you give increasingly more advanced tasks to it, you will eventually cross the line.
      • margalabargala 4 hours ago
        How is this any different from assigning increasingly more advanced tasks to an employee?
    • tonymet 3 hours ago
      we're winning so much we started complaining "I can't handle so much winning"
      • flaterkk 25 minutes ago
        Suffering from Success - Studio album by DJ Khaled ‧ 2013

        Applies here? :D

  • wolttam 1 hour ago
    I used to write code by hand.

    AI has removed some of the tedium, and freed up more of my bandwidth to think about the problems I’m trying to solve and what the actual best ways to solve those problems are.

    Only once I have a good feel for the problem I am solving do I go to the AI for help implementing.

    My style of prompting usually leads to code that is very close to what I would have manually typed. I review it and tweak it until it is effectively identical to what I would have typed.

    The speed up is significant. YMMV.

  • darrinm 2 hours ago
    I hear it a lot but this gambling analogy breaks when you look at actual outcomes. If you went to Vegas and after a few pulls on a one-armed bandit could _reliably_ walk away with the jackpot we wouldn’t even call it gambling anymore.
  • cmiles8 3 hours ago
    It’s like any powerful tool. If you use it right it’s amazing. If you get careless or don’t watch it closely you’ll get hurt really badly.

    Overall I’m a fan, but yes there are things to watch for. It doesn’t replace skilled humans but it does help skilled humans work faster if used right.

    The labor replacement story is bullshit mostly, but that doesn’t mean it’s all bad.

  • jsLavaGoat 3 hours ago
    Everything is "fast, cheap, good--pick two." This is no different.
    • smlacy 3 hours ago
      I like the analogy but which 2 is AI coding?

      Fast & Cheap (but not Good?) - I wouldn't really say that AI coding is "cheap"

      Cheap & Good (but not Fast) - Again, not really "cheap"

      Fast & Good (but not Cheap) - This seems like maybe where we're at? Is this a bad place?

      • flaterkk 23 minutes ago
        It's hitting all three, right _now_.

        Eventually, it will be just Fast and Good. It won't be cheap, as companies start moving towards profitability.

        Remember when Uber was super cheap? I do. They're fast and good though.

      • ambicapter 3 hours ago
        The proper idiom is "You can only pick two". It doesn't say that everything is two of them, or even one.
      • bigstrat2003 2 hours ago
        It's not cheap or good, it's just fast.
        • jsLavaGoat 2 hours ago
          It's fast. It's cheap compared to employees. It's really the latter that people are upset about.

          As for good. Well, how much software is really good? A lot of it is sewn together APIs and electron-like runtimes and 5,000 dependencies someone else wrote. Not exactly hand-crafted and artisanal.

          I'm sure everyone here's projects are the exception, but engineering is always about meeting the design requirements. Either it does or it doesn't.

  • wolandomny 2 hours ago
    Obviously the following isn't a completely original take, but it's worth stating that AI coding is just a fundamentally different job than "traditional" or "manual" coding. The previous job was to spec something out to a comfortable degree without spending all of your time on a spec when there are so many unknowns that will come up during the engineering stage. Then, the job was to engineer at a snail's pace (compared to today) and adjust the spec.

    Now, the job is to nail the spec and test HARD against that spec. Let the AI develop it and question it along the way to make sure it's not repeating itself all over the place (though I'm not sure even this is necessary anymore...). Find a process that helps you feel comfortable doing this and you can get the engineering part done at lightning speed.

    Both jobs are scary in different ways. I find this way more fun, however.

    • dragonwriter 14 minutes ago
      I don’t know that that's true. Iterating on the spec and development together as in traditional agile development seems to work well with AI. The pace is different (an iteration might be hours instead of weeks) and the human role is mostly as a combined architect/analyst/lead dev/product owner, but the issue that real requirements are rarely clear before software hits the hands of users doesn't go away just because an AI wrote more of the code.
  • simonw 4 hours ago
    Assigning work to an intern is gambling: they're inherently non-deterministic and it's a roll of the dice whether the work they do will be good enough or you'll have to give them feedback in order to get to what you need.
    • lunar_mycroft 3 hours ago
      1. Interns learn. LLMs only get better when a new model comes out, which will happen (or not) regardless of whether you use them now.

      2. Who here thinks that having interns write all/almost all of your code and moving all your mid level and senior developers to exclusively reviewing their work and managing them is a good idea?

      • simonw 3 hours ago
        I don't know that the "humans learn, LLMs don't" argument holds any more with coding agents.

        Coding agents look at existing text in the codebase before they act. If they previously used a pattern you dislike and you tell them how to do differently, the next time they run they'll see the new pattern and are much more likely to follow that example.

        There are fancier ways of having them "learn" - self-updating CLAUDE.md files, taking notes in a notes/ folder etc - but just the code that they write (and can later read in future sessions) feels close-enough to "learning" to me that I don't think it makes sense to say they don't learn any more.

        • lunar_mycroft 2 hours ago
          In some ways these methods are similar to the model "learning", but it's also fundamentally different from how models are trained and how humans learn. If a human actually learns something, they'll retain it even if they no longer have access to what they learned it from. An LLM won't (unless the labs train on it, which is out of scope). If you stop giving it the instructions, it won't know how to do the thing you were "teaching" it to do any more.
        • bigstrat2003 2 hours ago
          It is a matter of fact that LLMs cannot learn. Whether it is dressed up in slightly different packaging to trick you into thinking it learns does not make any difference to that fact.
          • simonw 2 hours ago
            Sure, LLMs can't learn. I'm saying that systems built around LLMs can simulate aspects of what we might call "learning".
    • sarchertech 3 hours ago
      That’s very true. But interns aren’t supposed to be doing useful work. The purpose of interns is training interns and identifying people who might become useful at a later date.

      I’ve never worked anywhere where the interns had net productivity on average.

      • simonw 3 hours ago
        Replace "intern" with "coworker" and my comment still holds.
        • sarchertech 2 hours ago
          It worked with interns because interns are temporary workers. It doesn’t work with coworkers because you get to know them over time, you can teach them over time, and you can pick which ones you work with to some degree.

          To come up with an analogy that works at all for AI, it would have to be something like temporary workers who code fast, and read fast, but go home at the end of the day and never return.

          You can make a lot of valuable software managing a team like that working on the subset of problems that the team is a good fit for. But I wouldn’t work there.

    • capitalsigma 1 hour ago
      People don't write blog posts about how they wake up at 3AM to assign new tasks to their intern, nor do they build "orchestration frameworks" that involve N layers of interns passing tasks down between each other
    • james2doyle 3 hours ago
      The only similarity is that they both say "you’re absolutely right" when you point out their obvious mistakes
    • sidrag22 3 hours ago
      exactly where my mind went as well. There aren't really levels to pulling a lever on a slot machine, other than the ability for each pull to result in more "plays" of the same potential outcome.

      The reason I think this metaphor keeps popping up is how easy it is to just hit a wall and constantly prompt "it's not working, please fix it", and sometimes that will actually result in a positive outcome. So you can choose to gamble very easily, and receive the gambling feedback very quickly, unlike with an intern, where the feedback loop is considerably delayed, and the delayed intern's output might simply be them screaming that they don't understand.

    • throw4847285 3 hours ago
      There are two major mistakes here.

      The first is equating human and LLM intelligence. Note that I am not saying that humans are smarter than LLMs. But I do believe that LLMs represent an alien intelligence with a linguistic layer that obscures the differences. The thought processes are very different. At top AI firms, they have the equivalent of Asimov's Susan Calvin trying to understand how these programs think, because it does not resemble human cognition despite the similar outputs.

      The second and more important is the feedback loop. What makes gambling gambling is you can smash that lever over and over again and immediately learn if you lost or got a jackpot. The slowness and imprecision of human communication creates a totally different dynamic.

      To reiterate, I am not saying interns are superior to LLMs. I'm just saying they are fundamentally different.

      And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.

      • simonw 2 hours ago
        > And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.

        Yeah, I agree with that.

        That thought crossed my mind as I was posting this comment, but I decided to go with it anyway because I think this is one of those cases where the comparison is genuinely useful.

        We delegate work to humans all the time without thinking "this is gambling, these collaborators are unreliable and non-deterministic".

        • throw4847285 2 hours ago
          True. I think that's why my second point is much stronger. The main issue is not delegation, or human vs machine intelligence. It's the instant feedback.

          Human collaboration has always been slow and messy. Large tech companies have always looked for ways to speed up the feedback loop, isolating small chunks of work to be delegated to contractors or offshore teams. LLMs have supercharged that. If you have a skilled prompter you can get to a solution of good enough quality by rapidly iterating, asking for output, correcting the prompt, etc.

          That's good if you legitimately have good ideas and the blocker is execution speed. But if the real blocker is elsewhere, it might give you the illusion of progress.

          I don't know. Everything is changing too fast to diagnose in real time. Let's check back in a year.

    • skepticATX 3 hours ago
      You generally don’t assign work to an intern just for the output, though.
    • Fellshard 3 hours ago
      An intern can be taught. If you try to 'teach' a craps table, they'll drag you out of the casino.
    • bluefirebrand 3 hours ago
      Drawing parallels between AI and interns just shows you're a misanthrope

      You should value assigning tasks to human interns more than AI because they are human

    • mathrawka 3 hours ago
      As someone who has worked with interns for years: always expect feedback and iterations, and be surprised if they get it the first time... which merits feedback as well!

      But looks like the intern mafia is bombarding you with downvotes.

  • cjlm 1 hour ago
    Totally agree, wrote something similar last year: https://cjlm.ca/posts/it-feels-like-gambling/
  • rustyhancock 3 hours ago
    Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them.

    Sometimes I think we put the cart before the horse. We gamble because evolution promotes that approach.

    Yes I could go for the reliable option. But taking a punt is worth a shot if the cost is low.

    The cost of AI is low.

    What is a problem is people getting wrapped up in just one more pull of the slot machine handle.

    I use AI often. But fairly often I simply bin its response and get to work on my own. A decent amount of the time I can work with the response given to make a decent result.

    Sometimes, rarely, it gives me what I need right off the bat.

    • bluefirebrand 3 hours ago
      > The cost of AI is low

      If we're only talking about money spent on prompting AI, maybe. The damage to online trust is massive imo. So is the damage done by looting the commons to build them.

      Typical privatize the profits socialize the costs bullshit

    • Barrin92 1 hour ago
      >Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them.

      Important to point out that every high culture produced restrictions on exactly those behaviors; gambling was a universal vice when that concept still mattered.

      America in particular had a work culture that favored, well, work and technical excellence. Now work is for suckers, thinking is for suckers, and precision isn't worth it when you can have some machine do it half-right.

      "Yes I could go for the reliable option. But taking a punt is worth a shot if the cost is low" might as well be the national slogan, from vibe coding to the Department of Defense. Even the venture capital industry that excels at slot machine sectors was itself already a slot machine.

    • hirako2000 3 hours ago
      I doubt gambling is in nature. Investments based on reason pay off. Evolution selects for sensible moves.

      Humans invented gambling as a rigged game that mimics what's in nature, perverted for profit.

      • rustyhancock 3 hours ago
        The "natural" form of gambling is this.

        You need to collect food: do you go to where you know there are berries (low value but high likelihood of finding), or scout off to find a herd of deer (high value but low likelihood of finding)?

        Looking for deer wouldn't be walking off in a random direction. You check water holes, known clearings, known fields.

        Each of these is an operation (walk to X and look), each has a low probability of meeting a deer.

        This is a variable reward scheme.

        The result is optimized foraging practice: you mostly hunt for deer, then fall back to berries. In larger groups, some will gather berries and some will hunt.

        Contrary to popular thought hunter and gatherer were not separate occupations.

        • hirako2000 1 hour ago
          But we don't say a hunter is out gambling; we say he's out hunting. Though there is investment, uncertainty, risk, and invariably some randomness in the returns, it's certainly not random. Gambling, for the most part if not entirely, is random.
      • glial 3 hours ago
        Broadly speaking, gambling is just making decisions without knowing the future. It's everywhere.
        • hirako2000 1 hour ago
          Gambling is taking a bet without any clue about the future.

          Nothing is done with certainty about the future. Yet we don't say we are constantly gambling, though some people do.

  • amw-zero 3 hours ago
    So is human coding.
  • 6thbit 23 minutes ago
    Come on now. I pull the slot machine every time I ask my coworker Digbert to work on a ticket.

    Will Digbert be able to handle it or will he pretend to handle it? Or will he handle it in a way that it will break again in six weeks and will evolve into his full time job for a year?

    If this is gambling, middle management has been gambling for too long.

    • muwtyhg 22 minutes ago
      You know there is a difference between a tool being unable to predictably accomplish its task, and asking employees to do work and them failing to do so. The accountability alone is leagues apart.
  • PaulHoule 3 hours ago
    I think somebody like Nate Silver might say “everything is gambling” if you really pressed them.

    A big theme of software development for me has been finishing things other people couldn’t finish and the key to that is “control variance and the mean will take care of itself”

    Alternatively, the junior dev thinks he has a mean of 5 minutes but the variance is really 5 weeks. The senior dev has a mean of 5 hours and a variance of 5 hours.

  • yoyohello13 3 hours ago
    I was just thinking about this. I was reading those tweets about the SV party where people were going home early to “check on their agents”, or the “token anxiety” people are having over whether they are optimizing their agent usage. This is all giving me addiction vibes, especially since at the end of the day there seems to be not much to show for it.
    • ryandrake 3 hours ago
      Addiction for the mere purpose of satisfying a compulsion, rather than to achieve a reward or physical "high."
  • Retr0id 4 hours ago
    > But now either the AI can handle it or it can pretend to handle it. Frankly it's pretending both times, but often it's enough to get the result we need.

    This has been how I think about it, too. The success rates are going up, but I still view the AI as an adversary that is trying to trick me into thinking it's being useful. Often the act is good enough to be actually useful, too.

    • mjburgess 3 hours ago
      The first anthropomorphization of AI which is actually useful.
      • Retr0id 3 hours ago
        It's not even an anthropomorphization; the reward function in RLHF-like scenarios is usually quite literally "did the user think the output was good"
  • aderix 3 hours ago
    Sometimes I feel that subsidising these packages (vs cost via API) is meant to make more and more people increasingly addicted
  • 7777332215 2 hours ago
    The problem with AI coding is that you no longer own the foundational tools.
    • rsoto2 2 hours ago
      What?? Surely once these companies have locked in their Claude workflows claude wouldn't somehow raise the price. Or steal inventions like Amazon does. Surely.
      • quikoa 1 hour ago
        Surely they aren't selling subscriptions at a loss to gain market share either.
  • fittingopposite 1 hour ago
    Background image makes the website fairly hard/unpleasant to read (in mobile view)
  • post-it 3 hours ago
    > But this doesn't really resemble coding. An act that requires a lot of thinking and writing long detailed code.

    Does it? It did in the past. Now it doesn't. Maybe "add a button to display a colour selector" really is the canonical way to code that feature, and the 100+ lines of generated code are just a machine language artifact like binary.

    > But it robs me of the part that’s best for the soul. Figuring out how this works for me, finding the clever fix or conversion and getting it working. My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they’ve been connected.

    Skill issue. Two nights ago, I used Claude to write an iOS app to convert Live Photos into gifs. No other app does it well. I'm going to publish it as my first app. I wouldn't have bothered to do it without AI, and my soul feels a lot better with it.

  • NickNaraghi 2 hours ago
    It's only "gambling" for now...

    The odds of success feel like gambling. 60%, or 40%, or worse. This is downstream of model quality.

    Soon, 80%, 95%, 99%, 99.99%. Then, it won't be "gambling" anymore.
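    The parent's progression can be made concrete with a back-of-the-envelope model: if each attempt succeeds independently with probability p, the expected number of attempts until the first success is 1/p (a geometric distribution). A sketch, with the success rates purely illustrative:

```python
def expected_attempts(p: float) -> float:
    """Mean number of independent tries until the first success
    (geometric distribution with per-try success probability p)."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return 1 / p

# The "gambling" feel fades as the per-try odds climb,
# because retries collapse toward one.
for p in (0.4, 0.6, 0.8, 0.95, 0.9999):
    print(f"success rate {p:>7.2%}: ~{expected_attempts(p):.2f} attempts on average")
```

    Whether real agent runs are anywhere near independent trials is, of course, its own gamble.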

    • krupan 2 hours ago
      Have you ever heard of an extrapolation like that being incorrect?
  • cadamsdotcom 1 hour ago
    > My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they’ve been connected.

    That’s only half of the transition.

    The other half - and when you know you’ve made it through the “AI sux” phase - is when you learn to automate the mopping up. Give the agent the info it needs to know whether it did good work, and if it didn’t, give it information so it knows what to fix. Trust that it wants to fix those things. Automate how that info is provided (using code!) and suddenly you are out of the loop. The amount of code needed is surprisingly small, and your agent can write it! Hook a few hundred lines of script up to your harness at key moments and you will never see dumb AI mistakes again, because the agent fixed them before presenting the work to you: your script told it about the mistakes while you were off doing something else.

    Think of it like linting but far more advanced - your script can walk the code AST and assess anything, or use regex; your agent will make that call when you ask for the script. If the script exits with code 2, stderr is shown to the agent! So you (via your script) can print to stderr what the agent did wrong: what line, what file, what mistake.

    It’s what I do every day and it works (200k LOC codebase, 99.5% AI-coded) - there’s info and ideas here: https://codeleash.dev/docs/code-quality-checks

    This is just another technique to engineer quality outcomes; you’re just working from a different starting point.
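    A minimal sketch of that kind of check script, assuming a Python codebase; the specific rule (flagging bare `except:` clauses) is purely illustrative, but the exit-code-2-plus-stderr convention is the mechanism described above:

```python
import ast
import sys

def find_bare_excepts(source: str, filename: str) -> list[str]:
    """Walk the AST and flag bare `except:` clauses (an illustrative rule)."""
    problems = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append(f"{filename}:{node.lineno}: bare `except:` clause")
    return problems

def run_check(paths: list[str]) -> int:
    """Return the harness exit code: 2 means 'show stderr to the agent'."""
    findings = []
    for path in paths:
        with open(path) as f:
            findings.extend(find_bare_excepts(f.read(), path))
    if findings:
        print("\n".join(findings), file=sys.stderr)  # what the agent will read
        return 2
    return 0
```

    Wired into the harness as, say, `sys.exit(run_check(changed_files))`, a finding blocks the step and the stderr text tells the agent what file, what line, what mistake.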

  • apf6 54 minutes ago
    Hiring a human is gambling too.
  • wagwang 3 hours ago
    > I divide my tasks into good for the soul and bad for it. Coding generally goes into good for the soul, even when I do it poorly.

    Lmk how you feel when you're constantly building integrations with legacy software by hand.

  • LetsGetTechnicl 3 hours ago
    Yes, that's literally how LLMs work; they're probabilistic.
  • __MatrixMan__ 3 hours ago
    Inductive reasoning of any kind (e.g. the scientific method) is gambling.
  • halotrope 3 hours ago
    idk, it works for me. it builds stuff that would have taken weeks in hours, ymmv
  • Gagarin1917 3 hours ago
    Trying to decide whether to refinance now or not feels like gambling too. Yet it’s financially beneficial to make some bet.

    Defining “gambling” like this isn’t really helpful.

    • gs17 3 hours ago
      If I said I had a machine where I put in "tokens", watch it spin, and either get nothing or something valuable (with what I get being largely chance), you'd presume it's some kind of slot machine. The important things IMO are the random chance of getting something and being able to keep retrying so rapidly.

      You can't keep paying to play the "refinancing game" until you get a good rate (at least not like pulling the lever again and again: you have to wait a long time, and you can't call the same bank over and over until suddenly they have an amazing rate). It's a different experience and the psychology is different.

  • DiscourseFan 3 hours ago
    When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than real human drivers for people to begin to trust them. Coding tools did not have to be better than humans to be adopted first. It's entirely possible for a human to make a catastrophic error. I imagine in the future it will be more likely that a human makes such errors, just like it's more likely that a human will make more errors driving a car.
    • Verdex 3 hours ago
      My understanding is that Waymo has gone on the record to say that they have human operators who remotely drive the vehicle in scenarios where the automated system is confused.

      Which I assert is semantically equivalent to saying: Human drivers (even when operating at the diminished capacity of not even being present in the car) are less likely to make errors driving a car than AIs.

      • krupan 2 hours ago
        This is getting off topic but they did not say the remote humans drive the cars. The cars always drive themselves, the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's ok to proceed forward
  • mika-el 1 hour ago
    depends on bet size. small scoped tasks with tight specs — agents are reliable. "build this feature" with no constraints — yeah that's gambling. I am 90% positive most agent failures I see are from vague task definitions, not model limitations. basically the fix is better scoping not better models
  • ryoshu 3 hours ago
    Like video gaming, but similar.
  • batuhandumani 1 hour ago
    Life is a gamble
  • nativeit 3 hours ago
    I have had very similar experiences. I am not a professional software developer, but have been a Linux sysadmin for over a decade, a web developer for much longer than that, and generally know enough to hack on other people’s projects to make them suit my own purposes.

    When I have Claude create something from scratch, it all appears very competent, even impressive, and it usually will build/function successfully…on the surface. I have noticed on several occasions that Claude has effectively coded the aesthetics of what I want, but left the substance out. A feature will appear to have been implemented exactly as I asked, but when I dig into the details, it’s a lot of very brittle logic that will almost certainly become a problem in future.

    This is why I refuse to release anything it makes for me. I know that it’s not good enough, that I won’t be able to properly maintain it, and that such a product would likely harm my reputation, sooner or later. What frightens me is there are a LOT of people who either don’t know enough to recognize this, or who simply don’t care and are looking for a quick buck. It’s already getting significantly more difficult to search for software projects without getting miles of slop. I don’t know how this will ultimately shake out, but if it’s this bad at the thing it’s supposedly good at, I can only imagine the kinds of military applications being leveraged right now…

  • hodder 2 hours ago
    Depending on anyone for anything is gambling.
  • dwa3592 3 hours ago
    A few thoughts on this: it's not gambling if the most expected outcome actually occurs.

    It also depends on what you're coding with;

    - If you're coding with opus4.6, then it's not gambling for a while.

    - If you're coding with gemini3-flash, then yeah.

    One thing I have noticed, though, is that you have to spend a lot of tokens to keep the error/hallucination rate low as your codebase increases in size. The math of this makes sense: as the codebase grows, there's physically more surface where something could go wrong. To avoid that, you have to consistently and efficiently make the surface and all its features visible to the model. If you have coded with a model for a week and it has produced some code, the model is not more intelligent after that week; it still has the same layers and parameters. So keeping the context relevant is a moving target as the codebase increases (and that's why it probably feels like gambling to some people).

    • Peritract 3 hours ago
      > it's not gambling if the most expected outcome actually occurs.

      > you have to spend a lot of tokens to keep the error/hallucination rate low

      Ironically, I find your comment more effective at convincing me AI coding is gambling than the original article. You're talking about it the exact same way that gamblers do about their games.

      • dwa3592 3 hours ago
        so your whole argument is that you are convinced that ai coding is gambling because according to you i am talking about it like gamblers talk about gambling?

        - Was there any more intelligence that you wanted to add to your argument?

      • dwa3592 3 hours ago
        lol that's interesting. care to explain why?
        • dminik 3 hours ago
          I mean, the most expected outcome does mostly happen. When gambling, you are expected to lose money and you do. I'm not quite convinced that the same isn't true for vibecoding.
  • macinjosh 51 minutes ago
    I disagree. I have a successful software product that I vibe coded using claude code starting last June. It does something novel and useful that wasn't yet offered on the App Store or any app on Android.

    I am not going to say what it is because all of the AI haters will immediately flock to leave it bad reviews and overwhelm my support systems with bad faith requests (something that has already happened).

    I've been writing software for 25 years, I know what I am doing. Every bug I shipped was my fault either because I didn't test well enough or I did not possess enough platform knowledge to know myself the right way to do things. "Unknown unknowns"

    But I have also learned better ways to do things and fixed every bug using AI tools. I don't read the code. I may scan it to gain context and then tweak a single value myself, but beyond that I don't write or read code anymore.

    It's not a magical few-shot-prompt-then-reap-profits machine. I just feel like a solopreneur ditch digger who just got a lease on a new CAT excavator. I can get work done faster; I can also do damage faster if I am not careful.

    Beyond this concern,

  • himata4113 3 hours ago
    I really hate when people write about the AI of the past. Opus 4.6 and GPT 5.4 [the latter not as much, imo; it's really boring and uncreative] have increased in capabilities so much that it's honestly mind-numbing compared to what we had LESS than a year ago.

    Opus specifically, from 4.1 to 4.5, was such a major leap that some take it for granted: it went from getting stuck in loops, generally getting lost constantly, and needing so much attention to keep it going, to being able to take a prompt, understand it from minimal context, and produce what you wanted. Opus 4.6 was a slight downgrade since it has issues with respecting what the user has to say.

  • extr 3 hours ago
    I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However, I care deeply about making sure everything works well because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.

    Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.

  • mpalmer 2 hours ago
    I do not think "AI coding" - as distinct from the human who drives it - is gambling. More like a delayed footgun for the uneducated. I don't mean that disparagingly, but I do mean it literally.

        I’ve certainly been spending more time coding. But is it because it’s making me more efficient and smarter or is it because I’m just gambling on what I want to see? 
    
    Is this really a difficult question to answer for oneself? If you can't tell if you're learning anything, or getting more confident describing what you want, I would suggest that you cannot be thinking that deeply about the code you're producing.

        Am I just pulling the lever until I reach jackpot?
    
    And even then, will you know you've won?

    At the very least, a gambler knows when they have hit jackpot. Here, you start off assuming you've won the jackpot every time, and maybe there'll be an unpleasant surprise down the line. Maybe that's still gambling, but it's pretty backwards.

  • samschooler 4 hours ago
    I think there are levels to this.

    - One shot or "spray and pray" prompt only vibe coding: gambling.

    - Spec driven TDD AI vibe coding: more akin to poker.

    - Normal coding (maybe with tab auto complete): eating veggies/work.

    Notably though, gambling has the massive downside of losing your entire life and life savings. The "vibe coding" bucket's worst case is being insufferable to your friends and family, wasting your time, and spending $200/month on a max plan.

    • parliament32 3 hours ago
      You remind me of those guys who swear they have a "system" at the casino.
      • samschooler 3 hours ago
        I'm not saying I have a system. I'm saying there are levels to this stuff. It's not a binary "gambling" or "not gambling".
  • irarrazaval26 1 hour ago
    surprised this isn't talked about more
  • lasgawe 3 hours ago
    haha.. I agree with the points mentioned in the article. Literally every model does this. It feels like this even with skills and other buzzword files
  • anal_reactor 2 hours ago
    An idea just occurred to me: why not tell AI to code in Coq? AFAIK the selling point of that language is that if it compiles, it's guaranteed to match its specification. It's just that it's a PITA to write code in Coq, but AI won't get annoyed and quit.
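    A toy illustration of that property, shown in Lean 4 rather than Coq (the idea carries over): the return type itself states the spec, so the definition only compiles if the accompanying proof goes through.

```lean
-- The return type demands a proof that the result equals 2 * n.
-- A body that didn't satisfy the spec would fail to type-check.
def double (n : Nat) : { m : Nat // m = 2 * n } :=
  ⟨2 * n, rfl⟩
```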
  • apitman 4 hours ago
  • tonymet 3 hours ago
    As always, scope the changes to no larger than you can verify. AI changes the scale, but not the strategy.

    Now you have more resources to test, reduce permissions scope, to build a test bench & procedure. All of the excuses you once had for not doing the job right are now gone.

    You can write 10k + lines of test code in a few minutes. What is the gamble? The old world was a bigger gamble.

  • rob_c 3 hours ago
    So.

    Is.

    Life.

    You've discovered probability; there was an 80% chance of that. Roll a die and do not pass go.

    Again: the output from an LLM is a probable solution, not right, not wrong.

  • CodingJeebus 3 hours ago
    For me, the feedback-loop acceleration that AI now permits is so addictive in my day-to-day flows. I've had a really hard time stepping away from work at a reasonable hour because I get dopamine hits seeing Claude build things so fast.

    Addiction and recovery is part of my story, so I've done quite a bit of work around that part of my life. I don't gamble, but I can confidently say that using LLMs has been an incredible boost in my productivity while completely destroying my good habits around setting boundaries, not working until 2AM, etc.

    In that sense, it feels very much like gambling.

  • rvz 3 hours ago
    It is indeed gambling. You are spending more tokens hoping that the agent aligns with your desired output from your prompt. Sometimes it works, sometimes it doesn't.

    Vibe gamblers hooked on coding agents who can't solve fizz buzz in Rust are given promotional offers by Anthropic [0] for free token allowances: the casino equivalent of free $20 bets or free spins, valid until March 27, 2026.

    The house (Anthropic) always wins.

    [0] https://support.claude.com/en/articles/14063676-claude-march...

  • luckydata 2 hours ago
    it's gambling until you learn how to set up proper harnesses; then it just becomes normal administration. It's no different than running a team: humans make mistakes too, which is why we have CI pipelines, automated testing, etc. AI-assisted coding "JUST" requires you to be extra good at that part of the job.
  • zzzeek 3 hours ago
    coding with an LLM works if the model you are following is: you have the role of architect and/or senior developer, and you have the smartest junior programmer in the world working for you. You watch everything it does, check its conclusions, challenge it, call it out on things it didn't get quite right

    it's really extremely similar to working with a junior programmer

    so in this post, where does this go wrong?

    > I am not your average developer. I’ve never worked on large teams and I’ve barely started a project from scratch. The internet is filled with code and ideas, most of it freely available for you to fork and change.

    Because this describes a cut-and-paster, not a software architect. Hence the LLM is a gambling machine for someone like this since they lack the wisdom to really know how to do things.

    There's of course a huge issue, which is: how are we going to get more senior/architect programmers into the pipeline if every junior is also doing everything with LLMs now? I can't answer that, and this might be the asteroid that wipes out the dinosaurs... but in the meantime, if you DO know how to write from scratch and have some experience managing teams of programmers, the LLMs are super useful.

    • bigstrat2003 2 hours ago
      > it's really extremely similar to working with a junior programmer

      Right, which is why LLMs aren't useful if you actually know what you're doing. It's a drain on your time to have to carefully check everything a junior writes, but you do it because he will learn and eventually return on that investment. With an LLM, there is no such long term payoff.

  • 1970-01-01 3 hours ago
    "60% of the time, it works every time"
  • 1234letshaveatw 3 hours ago
    Is using a calculator gambling?
  • xnx 4 hours ago
    ...and the payouts are fantastic.
  • artursapek 3 hours ago
    “hiring people is gambling”
  • leontloveless 2 hours ago
    [dead]
  • devcraft41 1 hour ago
    [dead]
  • webagent255 3 hours ago
    [dead]
  • ratrace 3 hours ago
    [dead]
  • Iamkkdasari74 1 hour ago
    [dead]
  • lokimoon 3 hours ago
    h1b coding is ignorance.
  • bensyverson 3 hours ago
    This "slot machine" metaphor is played out. If you're just entering a coin's worth of information and nudging it over and over in the hopes of getting something good, that's a you problem, not a Claude problem.

    If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.

    • ctoth 3 hours ago
      > If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.

      I am consistently using 100% of my weekly $200 max plan. I know how this thing works, I know how to get value out of it, and I wish what you said were true.

      If you do all of these things? You are in a better spot. You are in a far better spot than if you hadn't! Setting up hooks to ensure notes get written? Massive win! Red-green TDD? Yes, please! But in terms of just ... well, being able to rely on the damn thing?

      https://github.com/ctoth/claude-failures

    • james2doyle 3 hours ago
      _hyper-competent collaborator who may completely make things up occasionally and will sometimes give different answers to the same question_
      • bensyverson 3 hours ago
        So, indistinguishable from a human then
        • bigstrat2003 2 hours ago
          No. A competent human doesn't make things up; he admits ignorance. He also only very rarely changes answers he previously gave.
    • rustyhancock 3 hours ago
      Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them.

      In a healthy environment, we are harmed more by being totally risk averse than by accepting risk as part of life and work.

  • vermilingua 2 hours ago
    Not only is it gambling, it has the full force of the industry that built the attention market behind it. I find it extremely hard to believe that these tools have not been optimised to keep developers prompting the same way tiktok keeps people scrolling.
  • CraftingLinks 3 hours ago
    I see whole teams, pushed by the C-level, going all in with spec-driven + TDD development. The devs hate it because they are literally forbidden to touch a single line of code. But the results speak for themselves: it just works, and the pressure has shifted to the product people to keep up. The whole tooling to enable this had to be worked out first: all Cursor, plus extreme use of a tool called Speckit, connected to Notion and Jira to pump out documentation.
    • RealityVoid 3 hours ago
      > literally forbidden to touch a single line of code.

      That is extremely stupid. What does that ban get you? I react to this because a friend mentioned exactly this, and I was dumbfounded.

      • CraftingLinks 44 minutes ago
        I don't think it's implemented that harshly or enforced so hostilely, but they have these strict procedures now for how the code is to be developed. The procedure they follow is all centered around automated code generation. So they simply... don't anymore in practice; it's not part of the job description, so to speak. He wasn't happy, I could tell, but he also acknowledged it was working very well.
      • ryandrake 3 hours ago
        It seems like just a CxO dick measuring exercise.

        CEO1: "We allow our engineers to use AI for all work."

        CEO2: "Oh yea? We mandate our engineers use AI for at least N% of their work!"

        CEO3: "You think that's good? We mandate our engineers use AI for all code!!"

        CEO4: "Pfff, amateurs. We don't even allow our engineers to open source code editors or even look at the LLM output..."

        • CraftingLinks 51 minutes ago
          I also thought it was pushing it to the limit, but I think this is just some founder of a successful company deciding engineering was going to transform to this way of working. A huge bet, but the implementation didn't feel amateurish or ad hoc. Just not very pleasant for most devs to work that way. I'm sure some will look elsewhere. I know I would!
      • comboy 3 hours ago
        > That is extremely stupid. What does that ban get you?

        confidence in firing coders, I presume...

        • CraftingLinks 38 minutes ago
          They are hiring "architects", or do we call them analysts? The impression is we're going back to analysts drawing those old school UML-like diagrams, etc. Also, a lot of the devs are on the brink of just quitting, because it's "not programming" anymore. So not only will you still need devs, or people massaging those specs, you'll also need enough "product" people to keep that engine fed! If your management isn't lazy, I can see headcount continuing to rise within such companies. That doesn't mean the work will be ...satisfying for devs.
    • bigstrat2003 2 hours ago
      > but the results speak for themselves, it just works

      The results do speak for themselves, but it doesn't work.

    • rsoto2 2 hours ago
      yeah, i'm not gonna be an AI company's guinea pig just because the c-suite wants to sign me up. by "the results" do you mean AI psychosis and Dunning-Kruger syndrome?
      • CraftingLinks 56 minutes ago
        Like I said, devs don't like it. He said productivity went up 3-4x. "It works". There was no question of denying that as far as he was concerned. At the same time he was going to look for another job as it was just painful to work like that.