2 comments

  • cosmos0072 8 hours ago
    First stable release appeared on HN one year ago: https://news.ycombinator.com/item?id=43061183 Thanks for all the feedback!

    Today, version 1.0.0 adds structured pipelines: a mechanism to exchange (almost) arbitrary objects via POSIX pipes, and transform them via external programs, shell builtins or Scheme code.

    Example:

      dir /proc | where name -starts k | sort-by modified
    
    possible output:

      ┌───────────┬────┬───────────────┬────────┐
      │   name    │type│     size      │modified│
      ├───────────┼────┼───────────────┼────────┤
      │kcore      │file│140737471590400│09:32:44│
      │kmsg       │file│              0│09:32:49│
      │kallsyms   │file│              0│09:32:50│
      │kpageflags │file│              0│10:42:53│
      │keys       │file│              0│10:42:53│
      │kpagecount │file│              0│10:42:53│
      │key-users  │file│              0│10:42:53│
      │kpagecgroup│file│              0│10:42:53│
      └───────────┴────┴───────────────┴────────┘
    
    Another example:

      ip -j route | select dst dev prefsrc | to json1
    
    possible output:

      [{"dst":"default","dev":"eth0"},
      {"dst":"192.168.0.0/24","dev":"eth0","prefsrc":"192.168.0.2"}]
    
    Internally, objects are serialized before being written to a pipe - by default as NDJSON, though the format can be configured manually - and deserialized when read back from a pipe.
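
    To make the mechanism concrete, here is a minimal sketch in Python (purely illustrative - this is not schemesh's implementation): each step serializes one JSON object per line into a real POSIX pipe, and the reading side parses lines back into objects.

    ```python
    import json
    import os

    # Illustration of NDJSON over a POSIX pipe: the writer emits one
    # serialized object per line, the reader parses each line back.
    read_fd, write_fd = os.pipe()

    objects = [
        {"name": "kmsg", "type": "file", "size": 0},
        {"name": "kcore", "type": "file", "size": 140737471590400},
    ]

    # Writer side: serialize each object as one NDJSON line.
    with os.fdopen(write_fd, "w") as w:
        for obj in objects:
            w.write(json.dumps(obj) + "\n")

    # Reader side: deserialize line by line.
    with os.fdopen(read_fd) as r:
        received = [json.loads(line) for line in r]

    print(received == objects)  # True
    ```

    Because each line is a complete serialized object, any program on either end of the pipe can participate without sharing memory with the shell.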

    This allows arbitrary transformations at each pipeline step: filtering, choosing a subset of the fields, sorting with user-specified criteria, etc. And each step can be an executable program, a shell builtin or Scheme code.
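
    Each such transformation is just a function from objects to objects. For instance, a field-selection step analogous to the `select dst dev prefsrc` example above could be sketched like this (illustrative Python, with made-up sample data):

    ```python
    import json

    # Illustrative field-selection step, analogous to `select dst dev prefsrc`:
    # keep only the named fields that are actually present in each object.
    def select(objects, *fields):
        for obj in objects:
            yield {f: obj[f] for f in fields if f in obj}

    # Hypothetical sample input, shaped like `ip -j route` output.
    routes = [
        {"dst": "default", "dev": "eth0", "metric": 100},
        {"dst": "192.168.0.0/24", "dev": "eth0", "prefsrc": "192.168.0.2"},
    ]

    print(json.dumps(list(select(routes, "dst", "dev", "prefsrc"))))
    ```

    Note that fields missing from an object (like `prefsrc` on the default route) are simply dropped rather than emitted as null, matching the JSON output shown above.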

    If you know nushell, they will feel familiar as they are inspired by it - the implementation is fully independent, though.

  • ascent817 4 hours ago
    I'm not too familiar, but is this similar to nushell?
    • cosmos0072 3 hours ago
      Yes, they are similar at first glance. As I wrote:

      > If you know nushell, they will feel familiar as they are inspired by it - the implementation is fully independent, though.

      Looking deeper, there are two main differences:

      - Nushell structured pipelines are an internal construct and only work with nushell builtins. Schemesh uses actual POSIX pipes, which also makes it possible to insert external executables into the pipelines.

      - Schemesh also allows inserting arbitrary Scheme code into the pipelines. Nushell does too, in a sense: you have to write it in the nushell language, though.
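
      Since the objects travel over real pipes as text (NDJSON by default), any external program that reads and writes NDJSON can sit in the middle of a pipeline. A hypothetical external filter step, sketched in Python for illustration, could look like:

      ```python
      import json

      # Hypothetical external pipeline step: consumes NDJSON lines,
      # keeps only objects whose "name" starts with "k", emits NDJSON lines.
      def filter_step(lines):
          for line in lines:
              obj = json.loads(line)
              if obj.get("name", "").startswith("k"):
                  yield json.dumps(obj)

      # In a real pipeline this would read sys.stdin and write sys.stdout;
      # here we feed it sample lines directly.
      sample = ['{"name": "kmsg"}', '{"name": "uptime"}']
      print(list(filter_step(sample)))  # ['{"name": "kmsg"}']
      ```

      An internal-only pipeline model cannot hand objects to a separate process like this without an explicit export step.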

      • sunshine-o 17 minutes ago
        I am really interested in giving it a try, since I am a big fan of nushell but also really enjoyed Lisp through Clojure.

        Nushell is great because it is really "batteries included", in the sense that it ships with most of what you will ever need: http client (no need for curl), export/import of most formats (yaml, toml, csv, msgpack, sqlite, xml), etc. That really avoids the library- or tool-shopping problem. You can copy the 50MB nushell binary onto almost anything and your script will work.

        This is why this is really a "modern shell" as it includes what you need in 2026, not 1979.

        This is also why I use nushell rather than python or lua for most scripts.

        I am not familiar with the Scheme ecosystem, but I see you are implementing a lot within your shell (http client, sqlite support), which is great. But does that mean I can produce a Schemesh binary that includes Chez Scheme and run my script on any Linux or FreeBSD host?

        Anyway, I think what you are doing is really great!