33 comments

  • rented_mule 4 hours ago
    Something not unlike this happened to me when moving some batch processing code from C++ to Python 1.4 (this was 1997). The batch started finishing about 10x faster. We refused to believe it at first and started looking to make sure the work was actually being done. It was.

    The port had been done in a weekend just to see if we could use Python in production. The C++ code had taken a few months to write. The port was pretty direct, function for function. It was even line for line where language and library differences didn't offer an easier way.

    A couple of us worked together for a day to find the reason for the speedup. Just looking at the code didn't give us any clues, so we started profiling both versions. We found out that the port had accidentally fixed a previously unknown bug in some code that built and compared cache keys. After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.

    We immediately started moving the rest of our back end to Python. Most things were slower, but not by much because most of our back end was i/o bound. We soon found out that we could make algorithmic improvements so much more quickly, so a lot of the slowest things got a lot faster than they had ever been. And, most importantly, we (the software developers) got quite a bit faster.

    • ameixaseca 37 minutes ago
      My experience is the exact opposite.

      This was particularly true for one of the projects I worked on in the past, where Python was chosen as the main language for a monitoring service.

      In short, it proved to be a disaster: just the Python process collecting and parsing the metrics of all programs consumed 30-40% of the processing power on the lower-end boxes.

      In the end, the project went on for a while longer, and we had to apply all sorts of mitigations to make the performance impact less of an issue.

      We did consider replacing it all with a few open-source tools written in C plus some glue code. The initial prototype used a few MBs instead of dozens (or even hundreds) of MBs of memory while barely registering any CPU load, but in the end it was deemed a waste of time when the whole project was terminated.

    • asveikau 3 hours ago
      > After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.

      Pure speculation, but I would guess this has something to do with a copy constructor getting invoked in a place you wouldn't guess, that ends up in a critical path.

      • andrewflnr 2 hours ago
        Given the context, I'm thinking bad cache keys resulting in spurious cache misses, where the keys are built in some low-level way. Cache misses almost certainly have a bigger asymptotic impact than extra copies, unless that copy constructor is really heavy.
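        If that guess is right, a tiny sketch of the failure mode (hypothetical names, nothing to do with the actual C++ code) is a cache key that never compares equal, so every lookup is a spurious miss and the expensive work re-runs each time:

```javascript
// Hypothetical illustration: a cache key with fresh identity per call
// (analogous to comparing uninitialized padding bytes in C++) never
// matches, so the "cache" misses on every lookup.
const cache = new Map();
let misses = 0;

function expensive(n) {
  let s = 0;
  for (let i = 0; i < n; i++) s += i;
  return s;
}

// Buggy: a new object per call means a new Map key per call => always a miss.
function buggyLookup(n) {
  const key = { n };
  if (!cache.has(key)) { misses++; cache.set(key, expensive(n)); }
  return cache.get(key);
}

// Fixed: a value key actually hits on repeat lookups.
function fixedLookup(n) {
  const key = String(n);
  if (!cache.has(key)) { misses++; cache.set(key, expensive(n)); }
  return cache.get(key);
}
```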
        • asveikau 2 hours ago
          I'm just remembering a performance issue I heard of eons ago where a sorting function comparison callback inadvertently allocated memory. It made sorting very slow. Someone said in a meeting that sorting was slow, and we all had a laugh about "shouldn't have used the bubble sort!" But it was the key comparison doing something stupid.
      • NooneAtAll3 3 hours ago
        good ol' shallow-vs-deep copy
    • asa400 4 hours ago
      Fun story! Performance is often highly unintuitive, and even counterintuitive (e.g. going from C++ to Python). Very much an art as well as a science.

      Crazy how many stories like this I’ve heard of how doing performance work helped people uncover bugs and/or hidden assumptions about their systems.

    • envguard 4 hours ago
      Agreed — the headline buries the lede. Algorithmic complexity improvements compound across all future inputs regardless of implementation language, while the WASM boundary win is more of a one-time gain. Worth noting that the statement-level caching insight generalises well: many parser-adjacent hot paths suffer the same O(N²) trap when doing repeated prefix/suffix matching without memoisation.
    • DaleBiagio 4 hours ago
      [flagged]
      • Aurornis 1 hour ago
        This comment comes from a bot account. One of the more clever ones I’ve seen that avoids some of the usual tells, but the comment history taken together exposes it.

        I hit the flag button on the comment and suggest others do too.

      • furyofantares 1 hour ago
        Thanks, Programming History Facts Bot

        I was not actually sure this one was a bot, despite LLM-isms and, sadly, being new. But you can look at the comment history and see.

      • samiv 1 hour ago
        Until, at some point, in a language like Python, all the things that let you write software faster start to slow you down: the lack of static typing, type errors, time spent figuring out whether the foo method works with ducks or quacks or foovars, or whether the latest refactoring silently broke it because now you need bazzes instead of ducks. Yeah.
      • apitman 2 hours ago
        I don't think the better software part is playing out
        • ch4s3 2 hours ago
          There’s a lot of really great software out there right now, and a lot that’s terrible and I think powerful abstractions enable both.
        • remexre 2 hours ago
          You're thinking of the programs in low-level langs that survived their higher-level-lang competitors. If you plot the programs on your machine by age, how does the lowest quartile compare on reliability between programs written in each group?
  • blundergoat 6 hours ago
    The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
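    For the curious, here's a minimal sketch of that O(N²) -> O(N) streaming fix (a hypothetical stand-in parser, not the actual openui-lang code): the naive version re-parses the entire accumulated buffer on every chunk, while the cached version parses each completed statement once and only re-scans the incomplete tail:

```javascript
// Stand-in "parser": statements are semicolon-terminated.
function parseStatements(text) {
  return text.split(';').filter(s => s.length > 0);
}

// Naive streaming: re-parse the whole buffer per chunk => O(N^2) total work.
function naiveStream(chunks) {
  let buffer = '';
  let result = [];
  for (const chunk of chunks) {
    buffer += chunk;
    result = parseStatements(buffer);
  }
  return result;
}

// Statement-level caching: completed statements are parsed once; only the
// trailing incomplete statement is re-scanned => O(N) total work.
function cachedStream(chunks) {
  const parsed = [];
  let tail = '';
  for (const chunk of chunks) {
    tail += chunk;
    const cut = tail.lastIndexOf(';');
    if (cut !== -1) {
      parsed.push(...parseStatements(tail.slice(0, cut + 1)));
      tail = tail.slice(cut + 1);
    }
  }
  if (tail) parsed.push(tail);
  return parsed;
}
```

    Both return the same statements; only the amount of re-parsing differs.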
    • azakai 5 hours ago
      O(N²) -> O(N) was 3.3x faster, but before that, eliminating the boundary (replacing wasm with JS) led to speedups of 2.2x, 4.6x, 3.0x (see one table back).

      It looks like neither is the "real win". Both the language and the algorithm made a big difference, as you can see in the first column of the last table: moving off wasm was a big speedup, and improving the algorithm on top of that was another big speedup.

    • nulltrace 4 hours ago
      Yeah the algorithmic fix is doing most of the work here. But call that parser hundreds of times on tiny streaming chunks and the WASM boundary cost per call adds up fast. Same thing would happen with C++ compiled to WASM.
    • socalgal2 5 hours ago
      Same for uv, but no one takes away that message. They just think "rust rulez!" and ignore that all of uv's benefits are algo, not lang.
      • estebank 5 hours ago
        Some architectures are made easier by the choice of implementation language.
        • EdwardDiego 15 minutes ago
          uv also has the distinct advantage in dependency resolution that it didn't have to implement the backwards-compatible stuff pip does; I think Astral blogged on it. If I can find it, I'll edit the link in.

          Edit: it wasn't Astral, but here's the blog post I was thinking of: https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html

          That said, your point is very much correct, if you watch or read the Jane Street tech talk Astral gave, you can see how they really leveraged Rust for performance like turning Python version identifiers into u64s.

        • crubier 4 hours ago
          In my experience, Rust actually makes it a little bit harder to write the most efficient algo.
          • catlifeonmars 1 hour ago
            That’s usually ok bc in most code your N is small and compiler optimizations dominate.
          • Defletter 1 hour ago
            Would you be willing to give an example of this?
      • rowanG077 5 hours ago
        That's a pretty big claim. I don't doubt that a lot of uv's benefits are algo. But everything? Non-IO-bound native code should be an order of magnitude faster than Python.
        • jeremyjh 4 hours ago
          It's a pretty well-supported claim. uv skips doing a number of things that generate file I/O. File I/O is far more costly than the difference in raw computation, and pip can't drop those things for compatibility reasons.

          https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html

          • rowanG077 3 hours ago
            I don't think the article you linked supports the claim that none of uv's performance improvements come from using Rust over Python. In fact, it directly states the opposite: there's an entire section dedicated to why using Rust has direct performance advantages for uv.
            • jeremyjh 2 hours ago
              What it says is this:

              > uv is fast because of what it doesn’t do, not because of what language it’s written in. The standards work of PEP 518, 517, 621, and 658 made fast package management possible. Dropping eggs, pip.conf, and permissive parsing made it achievable. Rust makes it a bit faster still.

              • rowanG077 2 hours ago
                Yes, exactly! That quote directly disproves the claim that all of the improvements uv has over competitors are because of algos, not because of Rust.

                So the claim is not well supported by the article, as you stated; in fact, it is literally disproven by it.

                • kyralis 57 minutes ago
                  This is either an overly pedantic take or a disingenuous one. The very first line that the parent quoted is

                  > uv is fast because of what it doesn’t do, not because of what language it’s written in.

                  The fact that the language had a small effect ("a bit") does not invalidate the statement that algorithmic improvements are the reason for the relative speed. In fact, there's no reason to believe that Rust without the algorithmic improvements would be notably faster at all. Sure, "all" is an exaggeration, but the point still stands in the form most readers would understand it: algorithmic improvements are the important difference between the systems.

                  • rowanG077 44 minutes ago
                    I think we might be talking past each other a bit.

                    The specific claim I was responding to was that all of uv’s performance improvements come from algorithms rather than the language. My point was just that this is a stronger claim than what the article supports, the article itself says Rust contributes “a bit” to the speed, so it’s not purely algorithmic.

                    I do agree with the broader point that algorithmic and architectural choices are the main reason uv is fast, and I tried to acknowledge that, apparently unsuccessfully, in my very first comment (“I don't doubt that a lot of uv's benefits are algo. But everything?”).

                • jeremyjh 1 hour ago
                  You are right. 99% is not 100%.
                  • rowanG077 1 hour ago
                    I don't think the article has substantive numbers. You'd have to re-implement uv in Python to get those, and I don't think anyone has. It would at least be interesting to see how much time uv spends in syscalls vs pip and make a relative estimate from that.
        • thfuran 4 hours ago
          More than one, I'd think.
    • catlifeonmars 1 hour ago
      You’re not wrong, but that win would not get as many views. It’s not clickbaity enough
    • Aurornis 6 hours ago
      > Title undersells the more interesting engineering imo.

      Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.

    • sroussey 6 hours ago
      Yeah, though the n^2 is overstating things.

      One thing I noticed was that they time each call and then use a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.

      • fn-mote 5 hours ago
        For those of us not in the know, what are we expecting the results of the defenses to be here?
        • sroussey 25 minutes ago
          Jitter. It makes precise timings unreliable. Time the entire run of 1000 iterations and divide by 1000 instead of starting and stopping 1000 timers.
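          A minimal sketch of the difference (`parse` here is a hypothetical stand-in for the function under test, not the article's benchmark):

```javascript
// Stand-in for the function being benchmarked.
function parse(s) { return s.split(',').length; }

// Noisy: 1000 timer starts/stops; each sample is dominated by the clock's
// coarsened granularity (a timing-attack mitigation in browsers).
function perCallMedian(input, runs = 1000) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    parse(input);
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  return samples[runs >> 1];
}

// Better: one timer around the whole batch, divided by the run count,
// which amortizes the per-reading jitter.
function batchedMean(input, runs = 1000) {
  const t0 = performance.now();
  for (let i = 0; i < runs; i++) parse(input);
  return (performance.now() - t0) / runs;
}
```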
    • shmerl 6 hours ago
      More like a misleading clickbait.
  • jesse__ 3 minutes ago
    This somehow reminds me of the days when the fastest way to deep copy an object in javascript was to round trip through JSON.stringify/JSON.parse. I thought that was gross then, and I think this is gross now
  • nine_k 6 hours ago
    "We rewrote this code from language L to language M, and the result is better!" No wonder: it was a chance to rectify everything that was tangled or crooked, avoid every known bad decision, and apply newly-invented better approaches.

    So this holds even for L = M. The speedup is not in the language, but in the rewriting and rethinking.

    • MiddleEndian 6 hours ago
      Now they just need a third party who's never seen the original to rewrite their TypeScript solution in Rust for even more gains.
      • nine_k 6 hours ago
        Indeed! But only after a year or so of using it in production, so that the drawbacks would be discovered.
    • azakai 5 hours ago
      You're generally right - rewrites let you improve the code - but they do have an actual reason the new language was better: avoiding copies on the boundary.

      They say they measured that cost, and it was most of the runtime in the old version (though they don't give exact numbers). That cost does not exist at all in the new version, simply because of the language.

    • baranul 5 hours ago
      Truth. You can see improvement, even rewriting code in the same language.
    • awesome_dude 3 hours ago
      I think they were honest about that to a degree; they pointed out that one source of the speedup was the Python port fixing a bug they hadn't noticed in the C++

      Edit: fixed phone typos

  • evmar 5 hours ago
    By the way, I did a deeper dive on the problem of serializing objects across the Rust/JS boundary, noticed the approach used by serde wasn’t great for performance, and explored improving it here: https://neugierig.org/software/blog/2024/04/rust-wasm-to-js....
    • slopinthebag 4 hours ago
      Did you try something like msgpack or bebop?
  • spankalee 6 hours ago
    I was wondering why I hadn't heard of Open UI doing anything with WASM.

    This new company chose a very confusing name that has been used by the Open UI W3C Community Group for over 5 years.

    https://open-ui.org/

    Open UI is the standards group responsible for HTML having popovers, customizable select, invoker commands, and accordions. They're doing great work.

  • simonbw 1 hour ago
    Yeah if you're serializing and deserializing data across the JS-WASM boundary (or actually between web workers in general whether they're WASM or not) the data marshaling costs can add up. There is a way of sharing memory across the boundary though without any marshaling: TypedArrays and SharedArrayBuffers. TypedArrays let you transfer ownership of the underlying memory from one worker (or the main thread) to another without any copying. SharedArrayBuffers allow multiple workers to read and write to the same contiguous chunk of memory. The downside is that you lose all the niceties of any JavaScript types and you're basically stuck working with raw bytes.

    You still do get some latency from the event loop, because postMessage gets queued as a MacroTask, which is probably on the order of 10μs. But this is the price you have to pay if you want to run some code in a non-blocking way.
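    A small sketch of the transfer semantics (main-thread side only; `structuredClone` with a transfer list uses the same mechanism as `worker.postMessage(value, [buffer])`):

```javascript
// Transferring an ArrayBuffer detaches it from the sender: the bytes
// move, they are not copied.
const buf = new ArrayBuffer(16);
new Uint8Array(buf)[0] = 42;

const moved = structuredClone(buf, { transfer: [buf] });

console.log(new Uint8Array(moved)[0]); // 42: the data went along
console.log(buf.byteLength);           // 0: original is detached, no copy kept

// A SharedArrayBuffer instead stays usable on both sides at once;
// workers coordinate access to it with Atomics.
const shared = new SharedArrayBuffer(16);
const view = new Int32Array(shared);
Atomics.store(view, 0, 7);
console.log(Atomics.load(view, 0)); // 7
```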

    • jesse__ 7 minutes ago
      This should be the top comment
  • horacemorace 1 hour ago
    I'm more of a dabbler/script guy than a dev, but Every. Single. Thing I ever write in javascript ends up being incredibly fast. It forces me to think in callbacks and events and promises. Python and C (or async!) seem easy and sorta lazy in comparison.
  • sakesun 1 hour ago
    I heard a lot of similar stories 20+ years ago when I started using Python. A number of people claimed their solutions got faster when developed in Python, mainly because Python makes it easier to quickly pivot and experiment with alternative methods, which finally yielded a more efficient outcome in the end.
  • jeremyjh 4 hours ago
    > The openui-lang parser converts a custom DSL emitted by an LLM into a React component tree.

    > converts internal AST into the public OutputNode format consumed by the React renderer

    Why not just have the LLM emit the JSON for OutputNode? Why is a custom "language" and parser needed at all? And yes, there is a cost for marshaling data, so you should avoid doing it where possible, and do it in large chunks when it's not possible to avoid. This is not an unknown phenomenon.

  • joaohaas 5 hours ago
    God I hate AI writing.

    That final summary benchmark means nothing. It mentions a 'baseline' value for the 'Full-stream total' of the Rust implementation, then says `serde-wasm-bindgen` is '+9-29% slower', but it never gives us the baseline value, because clearly the only benchmark run against the Rust codebase was the per-call one.

    Then it mentions: "End result: 2.2-4.6x faster per call and 2.6-3.3x lower total streaming cost."

    But the "2.6-3.3x" is by their own definition a comparison against the naive TS implementation.

    I really think the guy just prompted claude to "get this shit fast and then publish a blog post".

  • vmsp 4 hours ago
    Not directly related to the post but what does OpenUI do? I'm finding it interesting but hard to understand. Is it an intermediate layer that makes LLMs generate better UI?
  • slopinthebag 4 hours ago
    This article is obviously AI generated and besides being jarring to read, it makes me really doubt its validity. You can get substantially faster parsing versus `JSON.parse()` by parsing structured binary data, and it's also faster to pass a byte array compared to a JSON string from wasm to the browser. My guess is not only this article was AI generated, but also their benchmarks, and perhaps the implementation as well.
  • nallana 5 hours ago
    Why not a shared buffer? Serializing into JSON on this hot path should be entirely avoidable
    • mavdol04 4 hours ago
      I think a shared array just avoids the copy, not the serialization, which is the main problem, as they showed with the serde-wasm-bindgen test
    • devnotes77 4 hours ago
      [dead]
  • envguard 4 hours ago
    The WASM story is interesting from a security angle too. WASM modules inheriting the host's memory model means any parsing bugs that trigger buffer overreads in the Rust code could surface in ways that are harder to audit at the JS boundary. Moving to native TS at least keeps the attack surface in one runtime, even if the theoretical memory safety guarantees go down.
  • dmix 6 hours ago
    That blog post design is very nice. I like the 'scrollspy' sidebar which highlights all visible headings.

    Claude tells me this is https://www.fumadocs.dev/

    • sroussey 6 hours ago
      Interesting, thanks. I need to make some good docs soon.
      • dmix 6 hours ago
        Good documentation is always worth the effort. Markdown explaining your products is gold these days with LLMs.
  • kennykartman 4 hours ago
    I dream of the day when there is no need to go through JS and Wasm can do the whole job by itself. Meanwhile, we are stuck.
  • marcosdumay 4 hours ago
    It would be great if people stopped dismissing the problem that WASM not being a first-class runtime for the web causes.
  • owenpalmer 3 hours ago
    So this is an issue with WASM/JS interop, not with Rust per se?
  • ivanjermakov 5 hours ago
    Good software is usually written on 2nd+ try.
  • caderosche 6 hours ago
    What is the purpose of the Rust WASM parser? Didn't understand that easily from the article. Would love a better explanation.
    • joshuanapoli 5 hours ago
      They use a bespoke language to define LLM-generated UI components. I think that this is supposed to prevent exfiltration if the LLM is prompt-injected. In any case, the parser compiles chunks streaming from the LLM to build a live UI. The WASM parser restarted from the beginning upon each chunk received. Fixing this algorithm to work more incrementally (while porting from Rust to TypeScript) improved performance a lot.
  • nssnsjsjsjs 4 hours ago
    Rewrite bias. You want to also rewrite the Rust one in Rust for comparison.
    • jeremyjh 4 hours ago
      It would be surprising if rewriting in Rust could change the WASM boundary tax that the article identified as the actual problem.
  • measurablefunc 1 hour ago
    I tried a similar experiment recently with an FFT on wav files in the browser, and javascript was faster than wasm. The Rust-to-wasm was mostly vibe coded, but FFT is a well-known algorithm, so I don't think there were any low-hanging performance improvements left to pick.
  • neuropacabra 5 hours ago
    This is a very unusual statement :-D
  • szmarczak 5 hours ago
    > Attempted Fix: Skip the JSON Round-Trip

    > We integrated serde-wasm-bindgen

    So you're reinventing JSON but binary? V8 JSON nowadays is highly optimized [1] and can process gigabytes per second [2], I doubt it is a bottleneck here.

    [1] https://v8.dev/blog/json-stringify

    [2] https://github.com/simdjson/simdjson

    • kam 4 hours ago
      No, serde-wasm-bindgen implements the serde Serializer interface by calling into JS to directly construct the JS objects on the JS heap without an intermediate serialization/deserialization. You pay the cost of one or more FFI calls for every object though.

      https://docs.rs/serde-wasm-bindgen/

  • slowhadoken 5 hours ago
    Am I mistaken or isn’t TypeScript just Golang under the hood these days?
    • jeremyjh 4 hours ago
      There is too much wrong here to call it a mistake.
    • iainmerrick 5 hours ago
      Hmm, there's an in-progress rewrite of the TypeScript compiler in Go; is that what you mean?

      I don't think that's actually out yet, and more importantly, it doesn't change anything at runtime -- your code still runs in a JS engine (V8, JSC etc).

      • koakuma-chan 2 hours ago
        npm i -D @typescript/native-preview

        You can use it today.

  • derodero24 2 hours ago
    [dead]
  • DaleBiagio 4 hours ago
    [dead]
  • dualblocksgame 4 hours ago
    [dead]
  • patapim 4 hours ago
    [dead]
  • aimarketintel 2 hours ago
    [dead]
  • SCLeo 5 hours ago
    They should rewrite it in rust again to get another 3x performance increase /s
  • ConanRus 3 hours ago
    [dead]