Has anyone critically examined Michael Levin's sorting algorithm claims?

After hearing Michael Levin on Lex Fridman and Tim Ferriss, I was genuinely blown away by his claim that deterministic sorting algorithms exhibit emergent clustering behavior: in chimeric arrays (arrays whose elements are governed by different algorithms), elements running the same algorithm spontaneously group together. He frames this as discovering "free compute" and evidence of basal intelligence in minimal systems.

Then I listened to Brett Hall's critique on TokCast, and it clicked.

The key move Levin makes is converting top-down sorting algorithms into agentic ones — each array element acts autonomously based on local rules. Once you do that, you're no longer studying the original algorithm. You're studying emergent behavior in agent-based systems, which is well-trodden ground. John Conway's Game of Life (1970) and Stephen Wolfram's "A New Kind of Science" (2002) established decades ago that simple local rules produce complex emergent patterns.
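For concreteness, here's a toy sketch (my own illustration, not the paper's actual code) of what that conversion looks like. Each position holds a value plus an "algotype"; on each tick a randomly activated element applies only its own local comparison rule, and the algotype travels with the value on a swap, which is what makes "clustering" measurable at all. The rule details, names, and the fallback-to-the-other-side behavior are all illustrative assumptions:

```python
import random

def step(arr, types, i):
    """One autonomous move by the element at index i."""
    n = len(arr)
    # Each algotype prefers a different neighbor pair, with a fallback to
    # the other side so a chimeric array cannot get stuck short of sorted.
    if types[i] == "bubble":
        pairs = [i, i - 1]   # prefer fixing the pair (i, i+1): look right
    else:                    # "insertion"-like agent: look left first
        pairs = [i - 1, i]
    for j in pairs:
        if 0 <= j < n - 1 and arr[j] > arr[j + 1]:
            # Swap values AND algotypes: the algotype rides along with its
            # value, so algotype runs along the array can be measured.
            arr[j], arr[j + 1] = arr[j + 1], arr[j]
            types[j], types[j + 1] = types[j + 1], types[j]
            return

def run(values, types, max_steps=100_000, seed=0):
    """Activate random elements until the array is sorted."""
    rng = random.Random(seed)
    arr, types = list(values), list(types)
    for _ in range(max_steps):
        if all(arr[k] <= arr[k + 1] for k in range(len(arr) - 1)):
            break
        step(arr, types, rng.randrange(len(arr)))
    return arr, types

values = random.Random(1).sample(range(50), 20)
types = ["bubble" if k % 2 else "insertion" for k in range(20)]
final_vals, final_types = run(values, types)
print(final_vals == sorted(values))  # the chimeric array still sorts: True
```

Note that there is no central loop driving bubble sort or insertion sort here. That's Hall's point as I understand it: whatever clustering shows up mid-run is a property of these agent-based dynamics, not of the textbook algorithms the elements are named after.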

The clustering result isn't surprising once you recognize the methodological shift. Of course agents following different local strategies will self-segregate — that's what agent-based models do.

What concerns me more is that this framing went unchallenged on two of the biggest podcasts in tech/science. Fridman is an ML researcher; the question "how does making the algorithm agentic change what you're actually studying?" seems like an obvious one for him to ask.

Has anyone here dug into the paper's methodology? Am I missing something, or is the novelty claim overstated?

5 points | by vladiim 3 hours ago

0 comments