5 comments

  • upghost 5 minutes ago
    "MIT Researchers have yet to discover self-aware sentient dataset. In unrelated news, Sam Altman promises with another 7 trillion dollars he can turn lead into gold. 'For real this time,' claims Altman."
  • southernplaces7 10 hours ago
    >Generative AI doesn't have a coherent understanding of the world

    The very headline is based on a completely faulty assumption: that AI has any capacity for understanding at all, which it doesn't. That would require self-directed reasoning and self-awareness, both of which it lacks based on any available evidence. (Though there's no shortage of irrational defenders here who somehow leap to claim that there's no difference between consciousness in humans and the pattern matching of today's AI technology, because they happened to have a "conversation" with ChatGPT, etc.)

    • _heimdall 7 minutes ago
      > The very headline is based on a completely faulty assumption, that AI has any capacity for understanding at all, which it doesn't.

      And this right here is why it's so frustrating to me that the term "AI" is used for LLMs. They are impressive for certain tasks, but they are nothing close to artificial intelligence and were never designed to be.

  • jmole 17 hours ago
    None of us has a coherent view of the world; the map is not the territory.
    • partomniscient 14 hours ago
      Yeah, I was going to call out MIT for pointing out the obvious, but there's enough noise/misunderstanding out there that this kind of article can lead to the 'I get it' moment for someone.
    • HarryHirsch 9 hours ago
      That is emphatically not true - animals and small children that can't speak yet know about object permanence. If something has come from over there and is now here, then it's no longer there.

      LLMs do not have that concept, and you'll notice very quickly if you ask chemistry questions. Atoms appear twice, and the LLM just won't notice. The approach has to be changed for AI to be useful in the physical sciences.

    • krapp 10 hours ago
      A map, if it is useful (which the subjective human experience of reality tends to be, for most people most of the time), is by definition a coherent view of the territory. Coherent doesn't imply perfect objective accuracy.
  • h_tbob 17 hours ago
    To be honest… it’s amazing it can have any understanding at all, given that the only “senses” it has are the 1024 dimensions of d_model!

    Imagine trying to understand the world if you were simply given books and books in a language you had never read… and you didn’t even know how to read or even talk!

    So it’s pretty incredible it’s got this far!

    • pjerem 10 hours ago
      I mean, I’m amazed by LLMs.

      But what you describe is basically done by every human on Earth: you are born not knowing how to read or talk, and after years of learning, reading all those books may give you some understanding of the world.

      Unlike LLMs, though, human brains don’t have a virtually infinite energy supply, cannot be parallelized, and have to dedicate their already scarce energy to a lot of things other than reading books: moving in a 3D world, living in a society, feeding themselves, doing their own hardware maintenance (sleep …), paying attention not to die every single day, etc.

      So, for sure, LLM _algorithms_ are really incredible, but they are also useful only if you throw a lot of hardware and energy at them. I’d be curious to see how long you’d need to train (not use) a useful LLM with only 20 W of power (which is more or less the power we estimate the brain uses to function).

      We can still be impressed by the results, but not really by the efficiency. And when you have to learn the entire written corpus in a few weeks or months, speed is pretty useful.
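      A rough back-of-envelope sketch of the comparison above (the ~1,287 MWh figure is an assumption drawn from a published estimate for training GPT-3, not from this thread):

      ```python
      # How long would a 20 W "brain budget" take to supply the energy
      # of one large LLM training run?
      # Assumed figure (not from this thread): ~1,287 MWh, a published
      # estimate for the GPT-3 training run.
      train_energy_wh = 1_287e6   # 1,287 MWh expressed in watt-hours
      brain_power_w = 20          # rough estimate of human brain power draw

      hours = train_energy_wh / brain_power_w
      years = hours / (24 * 365.25)
      print(f"{years:,.0f} years")  # on the order of ~7,000 years
      ```

      In other words, at brain-scale power the same energy budget spans millennia, which is the commenter's point about efficiency rather than capability.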

      • corimaith 10 hours ago
        Pretty sure the human brain is far more parallelized than a regular CPU or GPU. Our image recognition, for example, probably doesn't take shortcuts like convolution because of the cost of processing each "pixel"; we do it all directly with those millions of eye neurons. Well, to be fair, there is a lot of post-processing and "magic" involved in getting the final image.
  • mediumsmart 16 hours ago
    AI generates a world that is a language map, one word pointing to another word, like we did?