It just seems weird to me to throw all these stats together. Storing 75GB of logs and managing the compute for that many parallel workflows seem like problems on totally different scales.
Unfortunately I didn’t really get the point of the article after being bombarded with stats, except that the authors have an AI tool to sell.
To me that reads more like monorepo is a central point of failure and they’re scrambling to bandaid the consequence of that decision. And the bandaids aren’t gonna scale to 1000 people
I guess they’re missing whatever Google has to make their monorepo scale
interesting that they have an agent that is triggered on flaky CI failures. but it seems far too specific -- you can open pull requests from many other triggers.
there doesn't seem to be any upside to having it only for flaky tests, because the workflow is really agnostic to the context.
I don't really think this is at all at the quality bar for posts here. This is obviously AI-slop -- why should I invest more time reading your slop than you took to write it?
Even so, at what point do we consider the LLM-ification of all of tech a hazard? I've seen Claude go and lazily fix a test by loosening invariants. AI writes your code, AI writes your tests. Where is your human judgment?
Someone is going to lose money or get hurt by this level of automation. If the humans on your team cannot keep track of the code being committed, then I would prefer not to use your product.
> I've seen Claude go and lazily fix a test by loosening invariants.
He does pull a sneaky on you from time to time, even nowadays, in v4.6, doesn't he?
To me it's analogous to the current situation at the strait of Hormuz - it's an enormous crisis but since almost everyone has a buffer of oil stockpiles, we can pretend it's not there.
this is an extreme strawman - with this you're basically saying any software ever that has parts written by automation or cron jobs (even before LLMs) is not a product worth using? foolish.
Your response reads much more like a strawman than my original comment.
I’d challenge you to identify where in my post I said I wouldn’t use software that employs automation?
It is pretty clear I am not talking about running CI for automated and predictable signals or cron jobs. I am talking about using AI to write code and also fix tests.
It is exceedingly clear in practice that the volume of code produced by LLMs is too much for the humans using these tools to read and understand. We are collectively throwing decades of best practices out of the window in service of “velocity.” Even the FAANG shops I know of who previously had good engineering cultures seem to be endorsing the cult of: AI generated everything with stamp approval.
I think this is the first article that truly gave me “slop nausea”. So many “It’s not X. It’s Y.” Do people not realize how awful this reads? It’s not a novel either, just a few thousand words, just fucking write it and edit it yourself.
a test pass rate of 100% is a fairy tale. maybe achievable on toy or dormant projects, but real world applications that make money are a bit more messy than that.
I definitely have a 100% pass rate on our tests most of the time (in master, of course). By "most of the time" I mean that on any given day, you should be able to run the CI pipeline 1000 times and it would succeed all of them, never finding a flaky test in one or more runs.
In the rare case that one is flaky, it's addressed. During the days when there is a flaky test, of course you don't have 100% pass rate, but on those days it's a top priority to fix.
But importantly: this is library and thick client code. It should be deterministic. There are no DB locks, docker containers, network timeouts or similar involved. I imagine that in tiered application tests you always run the risk of various layers not cooperating. Even worse if you involve any automation/ui in the mix.
Obviously there are systems it depends on (Source control, package servers) which can fail, failing the build. But that's not a _test_ failure.
If the build fails, it should be because a CI machine or a service the build depends on failed, not because an individual test randomly failed due to a race condition, timeout, test run order issue, or similar.
That's not what I mean. I mean that anything but 100% is a "stop the world this is unacceptable" kind of event. So if there is a day when there is a flaky test, it must be rare.
To explain further:
There is a difference between having 99.99% of tests pass every day (unacceptable), which is also 99.99% of tests passing for the year, versus having 100% of tests passing on 99% of days and 99% of tests on a single bad day. That might also give a 99.99% pass rate for the year, but here you were productive on 99 of 100 days. So "100% is the normal" is what I mean, not that it's 100% pass on 100% of days.
Having 99.98% of tests pass on any random build is absolutely terrible. It means a handful of tests out of your test suite fail on almost _every single CI run_. If you require 100% test pass as a validation for PRs before merge, you'll never merge. If you require 100% test pass as a validation to deploy your main branch, you'll never deploy...
You want 100% pass on 99% of builds. Then it doesn't matter whether 1% or 99% of tests pass on the last build, so long as you have some confidence that "almost all builds pass entirely green".
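A quick back-of-the-envelope makes the distinction above concrete: if each test independently passes with some per-test rate, the chance of an all-green build is that rate raised to the number of tests. The suite size of 5000 here is an assumption chosen purely for illustration.

```python
def all_green_probability(per_test_pass_rate: float, num_tests: int) -> float:
    """Chance that every test in a run passes, assuming independent failures."""
    return per_test_pass_rate ** num_tests

# A per-test rate of 99.98% sounds high, but over 5000 tests most builds are red:
print(all_green_probability(0.9998, 5000))  # roughly 0.37
# Only a 100% per-test rate reliably yields green builds:
print(all_green_probability(1.0, 5000))     # 1.0
```

This is why "99.98% of tests pass on every build" and "100% of tests pass on 99% of builds" describe very different CI experiences, even though the yearly averages look similar.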
When I was at Microsoft my org had a 100% pass rate as a launch gate. It was never expected that you would keep 100% but we did have to hit it once before we shipped.
I always assumed the purpose was leadership wanting an indicator that implied that someone had at least looked at every failing test.
Please no AI slop, write your own bloody blog posts.
1. Test pass rate is 99.98% is not good - the only acceptable rate is 100%.
2. Tests should not be quarantined or disabled. Every flaky test deserves attention.
Jesus, this is why Bazel was invented.