> The task we study is linebreaking in fixed-width text.
I wonder why they focused specifically on a task that is already solved algorithmically. The paper does not seem to address this, and the references do not include any mentions of non-LLM approaches to the line-breaking problem.
There's also a lot of analogising of this to visual/spatial reasoning, even to the point of talking about "visual illusions", when it's clearly a counting task, as the title says.
It makes it tedious to figure out what they actually did (which sounds interesting) when it's couched in such terms and presented in such an LLMified style.
It's not strictly a counting task: the LLM sees same-sized tokens, but each token corresponds to a variable number of characters (a count that is not directly fed into the model).
It's like the difference between Unicode code points and UTF-8 bytes: you can't just count UTF-8 bytes to know how many code points you have.
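A quick illustration of the mismatch (a minimal sketch; the tiktoken part assumes that package is installed, and it is only one possible tokenizer, not necessarily the one used by the model in the paper):

```python
# Counting one unit tells you little about the other.
s = "naïve café"
print(len(s))                  # 10 Unicode code points
print(len(s.encode("utf-8")))  # 12 UTF-8 bytes ('ï' and 'é' take 2 bytes each)

# Likewise for tokens vs. characters. Assumes the tiktoken package is installed;
# the exact splits depend on which tokenizer the model actually uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["the", "linebreaking", "café"]:
    print(f"{word!r}: {len(word)} chars -> {len(enc.encode(word))} token(s)")
```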
Utter disrespect for using the term "biology" in relation to LLMs. No one would call the analysis of a mechanical engine "car biology".
It's an artificial system; call it system analysis.
The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old but good post with some specific examples: https://colah.github.io/notes/bio-analogies/
> I wonder why they focused specifically on a task that is already solved algorithmically.
The point is to see how LLMs implement algorithms internally, starting with this simple, easily understood algorithm.
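For reference, the non-LLM solution alluded to above is essentially the greedy first-fit line breaker (Python's textwrap.wrap does roughly this). The sketch below is my own minimal version, with an illustrative function name and width, just to show how simple the conventional algorithm is:

```python
# Greedy fixed-width line breaking: put each word on the current line
# if it fits, otherwise start a new line.
def greedy_wrap(text: str, width: int) -> list[str]:
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if len(candidate) <= width:
            current = candidate
        else:
            if current:
                lines.append(current)
            # Words longer than `width` end up alone on an over-long line.
            current = word
    if current:
        lines.append(current)
    return lines

print(greedy_wrap("the quick brown fox jumps over the lazy dog", 15))
# ['the quick brown', 'fox jumps over', 'the lazy dog']
```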