Reinforcement learning, explained with a minimum of math and jargon

(understandingai.org)

182 points | by JnBrymn 4 days ago

7 comments

  • Peteragain 1 day ago
    Reinforcement Learning is basically sticks and carrots, and the problem is credit assignment. Did I get hit with the stick because I said 5 plus 3 is 8? Or because I wrote my answers in green ink? Or... That used to be what RL was. S&B talk about "modern reinforcement learning" and introduce "Temporal Difference Learning", but IMO the book is a bit of a rummage through GOFAI. Is the recent innovation with LLMs perhaps to use feedback to generate prompts? Talking about RL in this context does seem to be an attempt to freshen up interest. "Look! LLMs version 4.0! Now with added Science!"
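
    In tabular form, the temporal difference update (TD(0)) is only a couple of lines; here's a minimal NumPy sketch, with names (V, alpha, gamma) that are illustrative rather than taken from S&B's code:

        import numpy as np

        n_states = 5
        V = np.zeros(n_states)       # value estimate per state
        alpha, gamma = 0.1, 0.9      # step size and discount factor

        def td0_update(s, r, s_next, done):
            # Bootstrapped target: the observed reward plus the discounted estimate of the next state.
            target = r + (0.0 if done else gamma * V[s_next])
            # Credit is pushed one step back through the TD error; this is where assignment happens.
            V[s] += alpha * (target - V[s])
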
  • jxjnskkzxxhx 17 hours ago
    I would encourage everyone to read Sutton and Barto directly. It's the best technical book I've read in the past year. Though if you're trying to minimize math, the first edition is significantly simpler.
  • mnkv 1 day ago
    Reasonable post with a decent analogy explaining on-policy learning; the only major thing I take issue with is

    > Reinforcement learning is a technical subject—there are whole textbooks written about it.

    and then linking to the still-WIP RLHF book instead of the book on RL: Sutton & Barto.

    • dawnofdusk 1 day ago
      Haha, that's crazy. I'm so used to reading RL papers that when the blog linked to a textbook about RL, I just filled in Sutton & Barto without clicking the link or thinking any further about it.

      I think my other criticism is that the historical importance of RLHF to ChatGPT is somewhat sidelined: at the beginning, the author pinpoints something like the rise of agents as the start of RL's influence on language modelling. In fact, the first LLM to attain widespread success was ChatGPT, and its secret sauce was RLHF... no need to start the story as late as 2023-2024.

  • b0a04gl 23 hours ago
    RL is usually shown as math + rewards + policies, but it's really training on noisy, changing data, learning from shaky guesses (TD bootstrap bias), and chasing vague rewards. That makes it unstable and not friendly to clean theory. These hidden issues make RL hard, but that's how it is.
  • jekwoooooe 20 hours ago
    I don’t think it’s useful to explain things that are fundamentally mathematical by leaving out the math and the technical details. It’s a good article, though.
    • chrisweekly 19 hours ago
      (caveat: I haven't yet read the article)

      Huh? Your 2nd sentence seems to contradict your 1st. Or is the article somehow "good" without being "useful"?

      • jekwoooooe 18 hours ago
        It was a good read on the concept, but I’m left unsatisfied by all the hand-waving. Like, how, physically, is the reinforcement actually saved? Is it a number in a file? What is the math behind the reward mechanism? Which variables are changed and saved? What is the literal deliverable when you serve this to a client?
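
        Even a toy sketch would have answered most of that; something like this (all names illustrative, nothing from the article) is what I had in mind, where the only thing that changes and gets shipped is a weight array:

            import numpy as np

            theta = np.zeros(2)                    # policy weights: one logit per action
            alpha = 0.1                            # learning rate

            def softmax(x):
                e = np.exp(x - x.max())
                return e / e.sum()

            for step in range(1000):
                p = softmax(theta)
                a = np.random.choice(2, p=p)       # sample an action from the current policy
                r = 1.0 if a == 1 else 0.0         # reward: action 1 pays off
                grad_logp = np.eye(2)[a] - p       # gradient of log pi(a) for softmax logits
                theta += alpha * r * grad_logp     # the "reinforcement" is this numeric weight update

            np.save("policy_weights.npy", theta)   # the deliverable: an array of numbers in a file
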
      • littlestymaar 17 hours ago
        > Huh? Your 2nd sentence seems to contradict your 1st. Or is the article somehow "good" without being "useful"?

        The article isn't what the title says it is, so it's still good despite the title's claim being questionable.

  • lsorber 21 hours ago
    For those who want to dive deeper, here’s a 300 LOC implementation of GRPO in pure NumPy: https://github.com/superlinear-ai/microGRPO

    The implementation learns to play Battleship in about 2000 steps, pretty neat!
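
    The core trick is small enough to show inline: each sampled completion is scored against its own group's mean (and standard deviation of) reward, so no learned value network is needed. A rough NumPy sketch of that advantage step (illustrative, not code lifted from the repo):

        import numpy as np

        def group_relative_advantages(rewards, eps=1e-8):
            # rewards: shape (n_prompts, group_size), one scalar reward per sampled completion
            mean = rewards.mean(axis=1, keepdims=True)
            std = rewards.std(axis=1, keepdims=True)
            # Each completion is judged relative to its own group; the group itself is the baseline.
            return (rewards - mean) / (std + eps)

        # Example: 2 prompts, 4 sampled completions each
        r = np.array([[1.0, 0.0, 0.5, 0.5],
                      [0.2, 0.8, 0.2, 0.8]])
        print(group_relative_advantages(r))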