8 comments

  • netdur 20 hours ago
I tried TQ for vector search and my findings are not good: it is not worth it if you cannot use a GPU. That said, I got the same search quality as fp32 using an 8-bit quant

I wrote an ANN extension for SQLite using TQ. I do save a lot on space, but fp32 is still faster despite everything I have tried

    code here https://github.com/netdur/munind/tree/main/src/tq
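For readers unfamiliar with the technique being discussed: ternary quantization maps each float32 component to {-1, 0, +1} plus a per-vector scale. A minimal sketch, assuming a common threshold heuristic (ratio of the mean absolute value); this is illustrative only and not necessarily what the linked extension implements:

```python
import numpy as np

def ternary_quantize(v, ratio=0.7):
    """Quantize a float32 vector to {-1, 0, +1} plus one scale.
    Threshold = ratio * mean(|v|) is a common TQ heuristic
    (an assumption here, not taken from the linked repo)."""
    t = ratio * np.abs(v).mean()
    q = np.zeros_like(v, dtype=np.int8)
    q[v > t] = 1
    q[v < -t] = -1
    nonzero = q != 0
    scale = float(np.abs(v[nonzero]).mean()) if nonzero.any() else 0.0
    return q, scale

def tq_dot(q1, s1, q2, s2):
    # The dequantization/rescaling at query time is the extra step
    # that can make fp32 faster on raw query latency.
    return s1 * s2 * float(np.dot(q1.astype(np.int32), q2.astype(np.int32)))

rng = np.random.default_rng(0)
a = rng.standard_normal(384).astype(np.float32)
b = rng.standard_normal(384).astype(np.float32)
qa, sa = ternary_quantize(a)
qb, sb = ternary_quantize(b)
print(float(np.dot(a, b)), tq_dot(qa, sa, qb, sb))
```

The codes themselves fit in 2 bits each once packed, which is where the space savings come from; the approximate dot product is what trades away exactness.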

    • ninja3925 17 hours ago
So I assumed it would get crushed by OPQ (which requires training)
    • teamchong 18 hours ago
You're right that fp32 is faster on raw query time; quantization adds an extra step. The main benefit is download size, since gzip won't compress float embeddings much, and that matters most in browser contexts
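The download-size point above is easy to quantify. A back-of-envelope sketch, assuming a 384-dimensional embedding and 2-bit packing of ternary codes (both assumptions for illustration):

```python
# Storage per vector: fp32 vs. packed ternary codes.
dim = 384                            # assumed embedding dimension
fp32_bytes = dim * 4                 # 4 bytes per float32 component
ternary_bytes = (dim * 2 + 7) // 8   # 2 bits per ternary code, packed

print(fp32_bytes, ternary_bytes, fp32_bytes // ternary_bytes)
# 1536 bytes vs. 96 bytes per vector: a 16x reduction before any
# gzip, which barely helps on high-entropy float data anyway.
```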
  • glohbalrob 1 day ago
Very cool. I added Google's new multi embedding 2 model to my site the other week.

I guess I need to dig into this and see if it's faster and has more use cases. Thanks for publishing your work!

  • hhthrowaway1230 1 day ago
Awesome! Also love the Gaussian splat demo, cool use case!
  • refulgentis 20 hours ago
    Sloppiest slop I've seen in a couple weeks:

    - fork of a fork of a quantization technique

    - Only contribution is...compiling JS to WASM by default?

    - suspicious burst of ~nothing comments from new accounts

- 6 comments 7 hours in; 4 flagged/dead, and the other 2 also spammy: confused and making category errors at best, more spam at worst.

    - Demo shows it's worse: 800 ms instead of 2.6 ms for text embedding search

- "but it saves space" - yes! 1.2 MB in RAM instead of 7.2 MB, in exchange for turning search into 1 s on a MacBook Pro M4 Max instead of a sub-frame duration.

- It's not even wrong to do this with the output embeddings; there are far more obvious ways to save space that don't affect retrieval time this much

  • himmelsee2018 23 hours ago
    [flagged]
  • bingbong06 1 day ago
    [flagged]
  • newbrowseruser 1 day ago
    [dead]
  • aritzdf 23 hours ago
    [flagged]