Let's focus on the real issue here, which is that HN has apparently normalized the double hyphen in the title to an en dash--yes, an en dash, not even an em dash.
> Process monitoring at 0.1-second intervals found zero git processes around reset times.
I don’t think this is a valid way of checking for spawned processes. Git commands are fast; 0.1-second intervals are not enough. I would replace the git on $PATH with a wrapper that logs all operations and then execs the real git.
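A minimal sketch of such a shim, assuming `~/bin` precedes the real binary on `$PATH` and that the real git lives at `/usr/bin/git` (adjust to taste):

```sh
#!/bin/sh
# ~/bin/git -- logging shim: record every invocation, then hand off to the
# real binary (assumed here to live at /usr/bin/git).
printf '%s git %s\n' "$(date '+%F %T')" "$*" >> "$HOME/git-invocations.log"
exec /usr/bin/git "$@"
```

That catches every git call no matter how briefly the process lives, which polling at 0.1-second intervals cannot.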
Sure looks to me like this whole case is Claude Code chasing its own tail, failing to debug, and offering to instead generate a bug report for the user when it can't figure out a better way forward.
Maybe even submitting the bug report "agentically" without user input, if it's running on host without guardrails (pure speculation).
I think this post potentially mischaracterises what may be a one-off issue for one person as a broader problem. I'm guessing some context has been corrupted?
It's not a one-off issue; it has happened to me a few times. Once it even force-pushed to GitHub, which doesn't allow branch protection for private personal projects. Here's an example.
1) claude will stash (despite clear instructions never to do so).
2) claude will use sed to bulk replace (despite clear instructions never to do so). sed replacements make a mess and touch far too many files.
3) claude restores the stash. Finds a lot of conflicts. Nothing runs.
4) claude decides it can't fix the problem and does a `git reset --hard`.
I have this right at the top of my CLAUDE.md and it makes things better, but unlike codex, claude doesn't follow it to the letter.
NEVER USE sed TO BULK REPLACE.
NEVER USE sed TO BULK REPLACE.
NEVER USE sed TO BULK REPLACE.
*NEVER USE FORCE PUSH OR DESTRUCTIVE GIT OPERATIONS*: `git push --force`, `git push --force-with-lease`, `git reset --hard`, `git clean -fd`, or any other destructive git operations are ABSOLUTELY FORBIDDEN. Use `git revert` to undo changes instead.
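Since rules like these are only instructions the model can choose to ignore, a deterministic backstop outside the LLM helps. A minimal sketch of a pre-push hook that rejects non-fast-forward (i.e. forced) pushes and ref deletions, standing in for the branch protection GitHub won't give private personal projects:

```sh
#!/bin/sh
# .git/hooks/pre-push -- reject non-fast-forward (forced) pushes and deletions.
# git feeds stdin lines of: <local ref> <local sha> <remote ref> <remote sha>
while read -r local_ref local_sha remote_ref remote_sha; do
  # A remote sha of all zeros means a brand-new ref; let it through.
  case "$remote_sha" in *[!0]*) ;; *) continue ;; esac
  if ! git merge-base --is-ancestor "$remote_sha" "$local_sha"; then
    echo "pre-push: refusing non-fast-forward push to $remote_ref" >&2
    exit 1
  fi
done
exit 0
```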
You might be right, but consider the implications: if context can get corrupted in 0.1% of cases and then trigger some other destructive behaviour, then after you hand 1000 tickets to an agent, your data might be accidentally wiped.
I've been using cursor at work for a year or two now, figured I'd try it on a personal project. I got to the point where I needed to support env-vars, and my general pattern is `source ./source-me-local-auth` => `export SOME_TOKEN="$( passman read some-token.com/password )"` ...so I wrote up the little dummy script and it literally just says: "Hrm... I think I'll delete these untracked files from the working directory before committing!" ...and goes skipping merrily along its way.
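For context, the whole dummy script was essentially that one line; `passman` just stands in for whatever secrets manager you use:

```sh
# source-me-local-auth -- deliberately untracked; sourced into the shell, never committed
export SOME_TOKEN="$(passman read some-token.com/password)"
```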
Never had that experience in the whole time using cursor at work so I had to "take the agent to task" and ask it "WTF-mate? you'd better be able to repro that!" and then circle around the drain for a while getting an AGENTS.md written up. Not really a big deal, as the whole project was like 1k lines in and it's not like the code I'd hand-written there was "irreplaceable", but it led to some interesting discussion w/ the AI like "Why should I have to tell you this? Shouldn't your baseline training data presume not to delete files that you didn't author? How do you think this affects my trust not just of this agent session, but all agent interactions in the future?"
Overall, these are quite interesting technology times we're living in.
Not sure I understand; wouldn't permissions prevent this? The user runs with `--dangerously-skip-permissions`, so they can expect wild behaviour. They should run with permissions and a ruleset.
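For example, a deny list in the project settings. A sketch, assuming the `deny` pattern syntax from Anthropic's permissions docs (treat the exact patterns as illustrative, not verified):

```sh
# write a sketch of .claude/settings.json; pattern syntax per the permissions
# docs -- these particular patterns are illustrative only
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Bash(git reset:*)",
      "Bash(git push --force:*)",
      "Bash(git clean:*)"
    ]
  }
}
EOF
```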
Who knows whether permissions would prevent this? Anthropic's documentation on permissions (https://code.claude.com/docs/en/permissions) does not describe how permissions are enforced; a slightly uncharitable reading of "How permissions interact with sandboxing" suggests that they are not really enforced and any prompt injection can circumvent them.
I opened up Hacker News and I saw this right at the top, and I assumed it had started happening to everyone. I thought, good thing I'm not running Claude Code right now.
Stuff like this is why I think the existing sandboxing approaches fall short; this is the kind of stuff I worry about, not editing files that are already protected by git.
Some people are upset at my brave new world characterization, but yeah even as someone deriving value from Claude Code we've jumped the shark on AI in development.
Either the industry will face that reality and recalibrate, or in 20 years we're going to look back on these days as the golden age of software reliability and just accept that software is significantly more broken than it used to be (we've been priming ourselves for that, after all).
I agree that it's worrying that we're moving more and more towards implicit and opaque state: hiding what exactly is getting edited, very limited tooling to check what the subagents are doing, setting up scheduled and recurring tasks without it being obvious, etc.
It's tending more and more towards pushing the user to treat the whole thing as a pure chat interface magic black box, instead of a rich dashboard that allows you to keep precise track of what's going on and giving you affordances to intervene. So less a tool view and more magic agent, where the user is not supposed to even think about what the thing is even doing. Just trust the process. If you want to know what it did, just ask it. If you want to know if it deleted all the files, just ask it in the chat. Or don't. Caring about files is old school. Just care about the chat messages it sends you.
People aren't upset about your characterization. Catchphrases, memes, and other low-quality comments (with no context, elaboration, or personal angle) are contrary to community ethos and get downvoted.
Feels like just yesterday that everyone agreed that critical code is read orders of magnitude more than written, so optimizing for quick writing is wrong.
Genuinely I think that perspective is still shared by many/most engineers.
I think we’ve seen a wave of bad actors - either employees of LLM companies, or bots - pushing hard on the idea that code quality doesn't matter and that “the models will improve so fast that your code quality degrading doesn’t matter”.
I think the humans pushing that idea may even believe it, but I don’t think they’re usually employed as software engineers at regular non-AI companies; rather, they have some incentive to believe it and to convince others as well.
While that's obviously a bug which should be fixed, having stuff just sitting around uncommitted for days (which is much longer than 10 mins) is an anti-pattern (one that I used to fall into).
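One cheap way out of that anti-pattern is a throwaway snapshot alias (the name `wip` here is arbitrary):

```sh
# define a snapshot alias that commits everything as a WIP commit
git config --global alias.wip '!git add -A && git commit -m "wip: snapshot"'
# run it periodically, or at least before letting an agent loose on the tree
git wip
```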
I’m having this weird vision of a “The Matrix 3”-type machine crawling around the central repository on Microsoft’s GitHub servers and just wreaking havoc.
The person who posted this bug doesn't seem like the pinnacle of software engineering. To me, this looks like either a user error or some corrupt file or context you should be able to clean up pretty quickly.
The weird part is that it's "shitting over the floor" in quite a deterministic manner: every 600 seconds (give or take less than 0.5 seconds) it does the exact same thing.
The idea that a natural request can get Claude to invoke potentially destructive actions on a timer is silly.
Isn't this a natural consequence of how these systems work?
The model is probabilistic and sequences like `git reset --hard` are very common in training data, so they have some probability to appear in outputs.
Whether such a command is appropriate depends on context that is not fully observable to the system, like whether a repository or its changes are disposable. Because of that, the system cannot rely purely on fixed rules and has to infer intent from incomplete information, which is also probabilistic.
With so many layers of probabilities, it seems expected that sometimes commands like this will be produced even if they are not appropriate in that specific situation.
Even a 0.01% failure rate due to context corruption, misinterpretation of intent, or guardrail errors would show up regularly at scale; that is 1 in 10,000 queries.
> Just by a thing being common in training data doesn't mean it will be produced.
That's not what I said at all. I never said it will be produced. I said there is some probability of it being produced.
> False, it goes against the RL/HF and other post training goals.
It is correct that frequency in training data alone does not determine outputs, and that post-training (RLHF, policies, etc.) is meant to steer the model away from undesirable behavior.
But those mechanisms do not make such outputs impossible. They just make them less likely. The underlying system is still probabilistic and operating with incomplete context.
I am not sure how you can be so confident that a probabilistic model would never produce `git reset --hard`. There is nothing inherent in how LLMs work that makes that sequence impossible to generate.
> It is meaningless to say that because the author was able to reproduce it multiple times.
I don't know how that refutes what I'm saying.
The behaviour was reproduced multiple times, so it is clearly an observable outcome, not a one-off. It just shows that the probability of `git reset --hard` is > 0 even with RLHF and post-training.
Yes, if something is reproducible and undesirable, it is a bug and RLHF can reduce it. I'm not disputing that.
My point is that fixing one bug does not eliminate the class of bugs. These systems are probabilistic and they can generate many different undesirable but plausible commands, each valid in some narrow context.
So you fix one with RLHF, but others will still appear given enough time. With git commands there is no system like Lean that can formally reject invalid output the way Lean rejects invalid proofs; everything the model produces can be "valid" in some narrow context.
Until the underlying probabilities become so negligible as to be practically impossible, these kinds of issues will keep surfacing even if you address individual bugs. We aren't there yet. I hope we get there someday, but until then we have to recalibrate our expectations and add deterministic safeguards that lie outside the LLM.
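One concrete shape for such a safeguard: a guard shim that sits in front of git and hard-blocks destructive invocations, assuming a shim directory precedes the real binary (assumed at /usr/bin/git) on $PATH:

```sh
#!/bin/sh
# ~/bin/git -- guard shim: refuse destructive subcommands, otherwise hand off
# to the real binary (assumed at /usr/bin/git; adjust as needed).
case " $* " in
  *" reset --hard"*|*" push --force"*|*" push -f "*|*" clean -f"*)
    echo "git-guard: blocked destructive command: git $*" >&2
    exit 1 ;;
esac
exec /usr/bin/git "$@"
```

Unlike CLAUDE.md instructions, this fails closed: the model can forget, but the shim cannot.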
That's interesting man, that's pretty f***' interesting. I don't think I've seen it though. I've let it run for hours making changes overnight and I only do git operations manually.
Oh, but maybe allowing it to do remote git operations is a necessary trigger.
You can reduce the risk, but not drive it to zero, and at scale even very small failure rates will surface.
1. If the problem the post describes is common enough, it is a bug and its frequency needs to be reduced (as you said).
2. If it is not common and it happens only for this user, it is not a bug and can mostly be ignored.
Point is: nothing inherent to the system makes it unusable.
I just checked, mine also doesn’t.
Do not share a workspace with the llm, or with anybody for that matter.
How would the llm even distinguish what was written by them and what was written by you?
This whole LLM thing is a blast, huh?
You reap what you sow, finance bro.
https://code.claude.com/docs/en/scheduled-tasks#set-a-one-ti...
What would it cost if the /loop command was required instead of optional?
> I guess, what I'm trying to say ... is this even a bug? Sounds like the model is doing exactly what it is designed to do.
False, it goes against the RL/HF and other post training goals.