The website seems to have some bugs on mobile, seen on Chrome 147.0.7727.137
- Cannot horizontally scroll the code snippets on the homepage when they overflow. The scroll bars appear, but swiping the snippet does nothing.
- Footer links are unresponsive (loon, GitHub, and MIT License links).
- On the changelog page, scrolling causes the hamburger menu to hide the release dates behind it.
- The hamburger close chevron looks misaligned (not sure if this was a deliberate choice).
I like the ubiquitous type inference. It reminds me a bit of ELSA for Emacs Lisp: https://github.com/emacs-elsa/Elsa. In particular, type-aware macros have been on my wishlist forever: there's no good reason I shouldn't be able to write, e.g., an elisp or CL/SBCL compiler-macro that specializes an operation based on its inferred type. In normal Lisps, it's hard to get even the declared types.
That said, I wish that part of Loon were less coupled to the allocation model. What made you opt for mandatory manual memory management in an otherwise high-level language? And effects?
There are two things common in language design that, honestly, strike me as unnecessary:
1. manual allocation and lifetime stacking, and
2. algebraic effects.
On 1: I think we often conflate the benefits of Rust-style mutability-xor-aliased reference discipline with the benefits of using literal malloc and free. You can achieve the former without necessitating the latter, and I think it leads to a nicer language experience.
It's just not true that GC "comes with latency spikes, higher memory usage, and unpredictable pauses" in any meaningful way with modern implementations. If anything, it leads to more consistent latency (no synchronous Drop of huge trees at unpredictable times) and better memory use (because good GCs use compressed pointers and compaction).
On 2: I get non-algebraic effects for delimited continuations. But lately I've seen people using non-flow-magical effects for everything. If you need to talk to a database, pick a database interface and pass an object implementing the interface to the code that needs it. Effects do basically the same thing, but implicitly.
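To make the explicit-interface alternative concrete, here is a minimal Rust sketch (the trait and type names are invented for illustration, not from the project):

```rust
// A plain interface: code that needs a database takes it as an argument.
trait Database {
    fn query(&self, sql: &str) -> Vec<String>;
}

// A stub implementation for tests or examples.
struct FakeDb;

impl Database for FakeDb {
    fn query(&self, _sql: &str) -> Vec<String> {
        vec!["row1".to_string()]
    }
}

// The dependency is visible in the signature; no effect machinery needed.
fn load_users(db: &dyn Database) -> usize {
    db.query("SELECT * FROM users").len()
}

fn main() {
    let db = FakeDb;
    println!("{}", load_users(&db)); // prints 1
}
```

Swapping in a real implementation is just passing a different object; the caller's code doesn't change, which is the same substitution effects buy you, only explicitly.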
I think some comments are missing the upside of it being precisely Rust, without any new semantics. If you want lisp that compiles to machine code, Common Lisp can get reasonably efficient. The purpose of bringing Rust into it is to surface Rust-specific semantics -- which many people quite like!
If you already have the ability to express the grammar productions in Rust that allow for optionally-specified types (e.g. variable declaration), then you have the ability to express lifetimes and the turbofish (which is just a curious way to call a generic function with a specific type parameter). The only weird thing would be that Lisp uses the apostrophe character for something very different than Rust, but you could just pick any other way to denote lifetimes.
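For illustration, a minimal Rust example of the turbofish; an s-expression surface would only need one extra form to express the same thing:

```rust
fn main() {
    // Type comes from the annotation:
    let a: Vec<i32> = Vec::new();
    // Same thing with the turbofish, supplying the parameter at the call site:
    let b = Vec::<i32>::new();
    // Common with parse, where inference alone would be ambiguous:
    let n = "42".parse::<u32>().unwrap();
    assert_eq!(n, 42);
    assert_eq!(a.len(), b.len());
}
```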
Type F must be a function that's generic over any possible lifetime 'a, with a single argument that's a reference with lifetime 'a to a tuple of two numbers, and returns a reference with the same lifetime 'a to an 8-bit number.
The full code is usually something like:
fn foo<F>(callback: F) where for<'a> F: ...
Which is a generic function foo that takes the argument of type F, where F must be...
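Filling in the elision with a hypothetical complete example (the concrete tuple and return types here are chosen to match the description above, not taken from any particular codebase):

```rust
// F must work for every lifetime 'a: it borrows a tuple of two numbers
// and returns a reference to an 8-bit number living exactly as long.
fn foo<F>(callback: F) -> u8
where
    for<'a> F: Fn(&'a (u8, u8)) -> &'a u8,
{
    let pair = (3, 4);
    *callback(&pair)
}

fn main() {
    // A closure that projects out the first element satisfies the bound.
    let first = foo(|t: &(u8, u8)| &t.0);
    assert_eq!(first, 3);
}
```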
So if I wanted to actually use this and I write some rust-but-lisp code and there's a compile error, will it show me a nice error message with an arrow pointing to where the error happened in my lisp code?
Can I use the amazing `rust-analyzer` LSP to get cool IDE features?
I suspect the answer is no, but these might be good further prompts to use.
It seems like this is more like writing Rust in an s-expression syntax instead of having a proper lisp dialect that compiles to Rust, which is cool I guess but not very interesting.
It's quite weird-looking for someone who's done any amount of lisp programming.
Yeah, it sort of reminds me of the microcode assembly of a few of the Lisp machines, which, while written in s-expressions, was also clearly not Lisp itself. But it could be an interesting target for some Lisp macros.
A let that defines variables that have a lifetime beyond the scope of the expression? Yeah, that's really unusual. And it's not even the oddest looking thing from the first example block of code.
Unfortunately, given the clear LLM basis of this project, s-expressions aren't a great choice. I've found coding agents struggle really hard with s-expression parentheses matching.
Much better to give them something more M-expr styled; an LL(1) grammar is probably helpful in that regard.
Basically the more you can piggyback on the training data depth for algol-style and pythonic languages the better.
That has definitely not been my experience as of late. I have produced multiple large-ish Clojure projects with AI that have been perfectly formatted and functional. Perhaps you were using an older or smaller model? I am admittedly using Claude with higher-end models and mid-to-high effort, but it has been working great for months for me at this point.
Nope, but to be fair when you're working on your own novel S-exprs you don't have LSPs to guide the coding agent. I imagine that it works a lot better in the context of a known and understood language environment like Clojure, CL, scheme, etc. The other option would be to write an LSP in a non-S-expr language to ensure that no turn can end with mismatched parens, for example.
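As a sketch of the simplest such check a tool could run after every edit (naive on purpose: it ignores string literals and comments, which real s-expr tooling would have to skip):

```rust
// Returns true iff every '(' has a matching ')', in order.
fn parens_balanced(src: &str) -> bool {
    let mut depth: i64 = 0;
    for c in src.chars() {
        match c {
            '(' => depth += 1,
            ')' => {
                depth -= 1;
                if depth < 0 {
                    return false; // closing paren with no opener
                }
            }
            _ => {}
        }
    }
    depth == 0
}

fn main() {
    assert!(parens_balanced("(defun f (x) (+ x 1))"));
    assert!(!parens_balanced("(let ((x 1)) x"));
}
```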
Greenspun's tenth rule was formulated in a time before things like first-class functions were commonplace in industrial languages. Rust supports not just functional programming idioms but outright Scheme-style macros, so it's out of scope for Greenspun's rule.
How do you change the syntax to eliminate backward compatibility? I guess you could change the names of most key functions between releases. But to stay compatible with Rust, you would need to make breaking changes every release.
Yes, but you could do the same by transforming Rust's ASTs. The only downside is that your input format is different from the format you are transforming. But the upside is that readability is much improved, which matters because code is typically read far more often than it is written.
For everyone shaming the project for "not implementing enough": you can definitely help me with it.
For everyone shaming the project for being "LLM slop": sure, but that's the reason something like this can exist in the first place. The point isn't to be a finished, production-ready product. The point is to be an interesting work, and just a sly bit silly.
>S-expression syntax parsers are not hard to write.
I'm not sure I quite understand the point of your comment.
Are you implying that LLMs should be used for very hard to write code? I feel like the best use of LLMs is to automate the easy stuff so that I can focus on the hard to write stuff.
Scheme already has hygienic macros; I don't get why you'd vibecode a worse (less battle-tested, LLM-generated) replacement. I'm not sure why this hit the front page, to be honest, because it doesn't seem noteworthy or interesting. (Anyone and their mother could vibecode something like this in eight hours.)
That was basically my intent with this project, but I took the laziest way to get there lol
> Everything Rust has … expressed as s-expressions. No semantic gap.
The first paragraph says literally that.
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Maybe we should one day add Golang or Rust to it.
https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
It reads as "no X, no Y, just slop" to me every time.
some pre-processor that "compiles into rust" from less awful syntax?
It's sort of, but not quite, like "El jefe"
"L rut piss"
Can we please write our own READMEs before posting to HN?
I don't even feel bad saying this because clearly OP is just the front for Claude here.