Agentic Patterns

(github.com)

145 points | by PretzelFisch 18 hours ago

13 comments

  • lunias 1 minute ago
    This is the real secret sauce right here: "score_7, score_8, score_9, watermark, paid_reward". Adding this to the end of all my prompts has unlocked response quality that I didn't think was possible! /s
  • nialse 3 hours ago
    Note: at the time of writing, the comments are largely skeptical.

    Reading this as an avid Codex CLI user, some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve and may be counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem, and is likely an issue that will go away over time.

    There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.

    But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.

    This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.

    • CjHuber 2 hours ago
      I have a theory that agents will improve a lot once they are trained on more recent data. I've had agents get context anxiety because they still think an average LLM context window is around 32k tokens. Also, building agents with agents, letting them do prompt engineering, etc., is still very unsatisfactory: they keep talking about GPT-3.5 or Gemini 1.5 and try to optimize prompts for those old models, which of course were almost totally different things. So I wonder whether that's how they think of themselves as well, and whether that artificially limits their agentic behavior too, because they just don't know how much more capable they are than GPT-3.5.
  • baalimago 6 hours ago
    Who is this for? Apart from the contributors, of course, who wish to feel good about immortalizing their 'novel' ideas.
    • usefulposter 6 hours ago
      It's a mix of signaling, busywork and productivity porn for the ingroup.

      A few years ago we had GitHub resource-spam about smart contracts and Web3 and AWESOME NFT ERC721 HACK ON SOLANA NEXT BIG THING LIST.

      Now we have repos for the "Self-Rewriting Meta-Prompt Loop" and "Gas Town":

      https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

      If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.

      • dandelionv1bes 5 hours ago
        I thought Gas Town was a satire until I saw the GitHub. Maybe it’s a very involved satire?

        It is, right? “Do not use Gas Town.”

    • keybored 3 hours ago
      > Who is this for?

      Star-farming anno 2026.

  • greatgib 9 hours ago
    Looks like all bullshit to me: making up complex terms to pretend you are doing engineering when it is baseless.

    It's as if I made a list of dev patterns and wrote:

    - caffeinated break for algorithmic thinking improvement

    When thinking about algorithmic logic, go have a coffee break, and then go back to the desk to work on it again.

    Here, for example, is one of the first "patterns" of the project that I opened:

       Dogfooding with rapid iteration for agent improvement.
    
       Developing effective AI agents requires understanding real-world usage and quickly identifying areas for improvement. External feedback loops can be slow, and simulated environments may not capture all nuances.
    
       Solution:
       The development team extensively uses their own AI agent product ("dogfooding") for their daily software development tasks.

    Or

    "Extended coherence work sessions"

       Early AI agents and models often suffered from a short "coherence window," meaning they could only maintain focus and context for a few minutes before their performance degraded significantly (e.g., losing track of instructions, generating irrelevant output). This limited their utility for complex, multi-stage tasks that require sustained effort over hours.
    
       Solution
       Utilize AI models and agent architectures that are specifically designed or have demonstrably improved capabilities to maintain coherence over extended periods (e.g., several hours)
    
    
    Don't tell me that it is not all bullshit...

    I'm not saying that what is said is untrue.

    Just imagine you took a two-page pamphlet about how to use an LLM and split every sentence into a wannabe "pattern".

    • soulchild77 7 hours ago
      I felt the same and I asked Claude about it. The answer made me chuckle:

      > There’s definitely a tendency to dress up fairly straightforward concepts in academic-sounding language. “Agentic” is basically “it runs in a loop and decides what to do next.” Sliding window is just “we only look at the last N tokens.” RAG is “we search for stuff and paste it into the prompt.” [...] When you’re trying to differentiate your startup or justify your research budget, “agentic orchestration layer” lands differently than “a script that calls Claude in a loop.“
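      The deflationary definition above ("a script that calls Claude in a loop") can be made literal. Here is a minimal sketch of such a loop; `fake_llm`, `TOOLS`, `agent`, and the message format are all hypothetical stand-ins for illustration, not any real model API:

```python
# Minimal "agent" sketch: a loop that calls an LLM and lets it decide
# whether to use a tool or finish. All names here are illustrative.

def fake_llm(messages):
    """Deterministic stub standing in for a real chat-model call:
    first turn requests a tool, second turn answers using the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": "agentic patterns"}
    return {"content": "done: " + messages[-1]["content"]}

# Toy tool registry; a real agent would wire in search, shell, etc.
TOOLS = {"search": lambda query: f"results for {query}"}

def agent(task, llm, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "tool" in reply:  # the model decided to act
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                # the model decided it is finished
            return reply["content"]
```

      Calling `agent("explain agents", fake_llm)` runs one tool call and then returns the stub's final answer; swapping `fake_llm` for a real model call is, per the quote above, most of what "agentic" means.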

      • spaceman_2020 6 hours ago
        I had someone argue on Twitter recently that they had made an “agent” when all they had really done was use n8n to make a loop that used LLMs and ran on a schedule

        People are calling if-then cron tasks “agents” now

      • greatgib 2 hours ago
        Now that you say it, I just realized it might be useful to me one day: if I'm a bland, useless startup, I can dress up my pitch with these terms to try to raise investor money...
  • vzaliva 9 hours ago
    I like the idea, but I am confused about why many of them are expressed as code. How am I supposed to use them?
    • jamesrom 6 hours ago
      This comment defines the next era of software development.
  • jauntywundrkind 8 hours ago
    Typically awesome-subject-matter repositories link out to other resources.

    There are so, so many prompt and agentic-pattern repositories out there. I'm pretty turned off by this repo flouting the convention of what awesome-* repos are: it is the work itself, rather than a set of links to the good work that's out there for us to evaluate.

    • matsemann 2 hours ago
      I'd rather have a single repo with a curated format and thought behind it (not sure this one is, just assuming) than the usual awesome-* lists that link to every single page on a subject with loads of overlap, so I don't even know which one to look at for a given problem.
  • hsaliak 10 hours ago
    What if this repo itself was vibe-coded?
    • ColinEberhardt 6 hours ago
      Looking at the commit log, almost every commit references an AI model:

      https://github.com/nibzard/awesome-agentic-patterns/commits/...

      Unfortunately it isn’t possible to detect whether AI was being used in an assistive fashion, or whether it was the primary author.

      Regardless, a skim read of the content reveals it to be quite sloppy!

    • skhameneh 9 hours ago
      I didn't have the patience to click through after visiting a few pages only to find the depth lacking.

      About an hour or so ago I used Opus 4.5 to give me a flat list with summaries. I tried to post it here as a comment, but it was too long and I didn't bother to split it up. They all seem to be things I've heard of in one way or another, but nothing really stood out for me. Don't get me wrong, they're decent concepts, and it's clear others appreciate this resource more than I do.

    • only-one1701 10 hours ago
      It’s slop all the way down
  • solomatov 13 hours ago
    This all sounds interesting, but how effective are they? Does anyone have experience with any of them?
    • aranchelk 13 hours ago
      Yes, agentic search over vector embeddings. It can be very effective.
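        For concreteness, here is a toy sketch of the retrieval half of that pattern: nearest-neighbor search over embedding vectors by cosine similarity. The corpus, the hand-written vectors, and the `retrieve` helper are purely illustrative; in practice the vectors come from an embedding model and the search runs in a vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: hand-written 3-d vectors standing in for real embeddings.
corpus = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                    reverse=True)
    return ranked[:k]
```

        The "agentic" part is then just letting the model decide when to call `retrieve` (and with what query) instead of running it on every turn.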
      • solomatov 13 hours ago
        It's a very well-known pattern. But what about the others? There's a lot of very interesting stuff there.
        • aranchelk 12 hours ago
          Tool Use Steering via Prompting. I’ve seen that work well also, but I don’t know if I’d quite call it an architectural pattern.
  • d-lisp 5 hours ago
    Is this a joke like FizzBuzzEnterpriseEdition [0]?

    https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

  • rammy1234 13 hours ago
    I find it interesting that we already have established patterns while the agentic approach is still being adopted across industries, at varying levels of maturity.
  • hmcamp 16 hours ago
    I like this list.