3 comments

  • snovv_crash 2 hours ago
    Curious how this would deal with things like Kahan Summation, which corrects floating point errors that theoretically wouldn't exist if you had infinite precision representations.
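    (For readers unfamiliar with it: Kahan summation carries the low-order bits lost in each floating-point addition in a separate compensation term. A minimal Python sketch, not from the paper under discussion:)

```python
def kahan_sum(values):
    """Compensated summation: recovers low-order bits that naive
    left-to-right float addition would discard."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # fold previously lost bits back in
        t = total + y        # big + small: low-order bits of y may be lost
        c = (t - total) - y  # algebraically zero; captures the rounding error
        total = t
    return total

# Summing many small terms onto one huge term: naive sum() drops
# every 1.0 (the spacing between doubles near 1e16 is 2.0), while
# the compensated sum keeps them.
vals = [1e16] + [1.0] * 1000
naive = sum(vals)
compensated = kahan_sum(vals)
```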
  • owlbite 3 hours ago
    It will be interesting to see if this solves any issues that aren't already addressed by the likes of matlab / SciPy / Julia. Reading the paper it sounds a lot like "SciPy but with MLIR"?
    • geremiiah 3 hours ago
      It's more like OpenXLA or the PyTorch compiler: it codegens Kokkos C++ kernels from MLIR-defined input programs, which can, for example, be exported from PyTorch. Kokkos is common in scientific-computing workloads, so emitting readable kernels is a feature in itself. Beyond that, there's a lot of engineering that can go into such a compiler to specifically optimize sparse workloads.

      What I am missing is a comparison with JAX/OpenXLA and PyTorch with torch.compile().

      Also, instead of rebuilding a whole compiler framework, they could have contributed to Torch Inductor or OpenXLA, unless they had design decisions that were incompatible. But it's quite common for academic projects to reinvent the wheel, and that's not necessarily a bad thing; it's a pedagogical exercise.

  • trevyn 2 hours ago
    Isn't this where Mojo is going?
    • uoaei 1 hour ago
      Speaking of, where is Mojo?