1. For those who've taken the exam or gone through the prep material: Is the material worth understanding at a conceptual level, or does it feel like it'll age out quickly? As agents get better at handling architecture decisions automatically…

  2. Currently I try to turn off any MCP servers I'm not using, use Sonnet for implementation and Opus only for planning, and start new conversations when possible.

  3. LLM-powered soccer simulation where every player on the field is an AI agent running a decide() callback — generated, sandboxed, and evolved by large language models. Four clean layers.
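The per-player decide() callback described above can be sketched in a few lines. Everything here (PlayerState, make_player, the action set) is a hypothetical illustration of the pattern, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlayerState:
    x: float        # field position
    y: float
    has_ball: bool

# Hypothetical contract: decide() maps a state to an action string.
# In the post's setup, the decide() body would be LLM-generated.
def make_player(decide: Callable[[PlayerState], str]) -> Callable[[PlayerState], str]:
    def step(state: PlayerState) -> str:
        action = decide(state)
        # Sandbox-style guard: only whitelisted actions reach the simulation,
        # no matter what the generated callback returns.
        if action not in {"move", "pass", "shoot", "hold"}:
            return "hold"
        return action
    return step

striker = make_player(lambda s: "shoot" if s.has_ball else "move")
```

The guard in step() stands in for the sandboxing idea: generated code proposes, the simulation layer disposes.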

  4. Let n be a positive integer. Prove that sum_{k=1}^n gcd(k,n) = sum_{d|n} d * phi(n/d) where phi is Euler's totient function.
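The identity in item 4 follows from a standard grouping argument: partition {1, …, n} by the value d = gcd(k, n). Each such d divides n, and gcd(k, n) = d exactly when k = dm with 1 ≤ m ≤ n/d and gcd(m, n/d) = 1, so exactly φ(n/d) values of k attain it. A sketch:

```latex
% Partition {1,...,n} by d = gcd(k,n); every such d divides n.
% gcd(k,n) = d  iff  k = d m with 1 <= m <= n/d and gcd(m, n/d) = 1,
% so exactly \varphi(n/d) values of k satisfy gcd(k,n) = d.
\[
\sum_{k=1}^{n} \gcd(k,n)
  \;=\; \sum_{d \mid n} d \cdot \#\{\, k : \gcd(k,n) = d \,\}
  \;=\; \sum_{d \mid n} d \,\varphi(n/d).
\]
```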

  5. Trying to collect the best claude.md files. If you have one that works really well for you, please copy it into the comments and let me know what kind of coding you normally do (language, surface, kind of project, etc.).

  6. Most multi-agent systems fail the same way: agents drift apart across handoffs. By turn 3 they are working in different realities.

  7. model roundup

    Qwen 3.6
    367 items

    Qwen3.6-35B-A3B, a 35-billion-parameter sparse MoE model with 3 billion active parameters, was released on April 16, 2026, as open-source software under the Apache 2.0 license by Alibaba Qwen. It offers advanced functionality across various AI applications and outperformed competitors in drawing tests.

    event

    GPT-4
    25 items

    Recent developments in AI automation include a sales team entirely run by bots achieving $28k MRR, and new tools like Arc Gate blocking prompt injection before it reaches GPT-4. Meanwhile, users are exploring workflows to reduce cross-checking time and improve insights from large language models.

  8. 0% accuracy decrease. Keep what matters: Rose 1 trims noisy context before your model call and keeps the answer intact.
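Context trimming of this kind can be sketched roughly as scoring chunks against the query and keeping only the strongest; this is a generic illustration of the idea, not Rose 1's actual algorithm, and trim_context is a hypothetical name:

```python
def trim_context(chunks: list[str], query: str, keep: int = 3) -> list[str]:
    """Keep the `keep` chunks sharing the most words with the query.
    Word overlap is a crude relevance proxy; real trimmers typically
    use embeddings or a learned scorer."""
    q = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,  # sorted() is stable, so ties keep original order
    )
    return scored[:keep]

chunks = [
    "The invoice total was $420.",
    "Weather today is sunny.",
    "Payment is due within 30 days of the invoice date.",
    "The office cat is named Biscuit.",
]
kept = trim_context(chunks, "When is the invoice payment due?", keep=2)
```

The point of the pattern is that trimming happens before the model call, so the model only ever sees the surviving chunks.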

  9. I'm building an MCP server and a CLI at my company, directly exposed to users. I built the MCP server first, investing time and thought in not making it just a wrapper around our APIs.

  10. On February 5, 2026, Nicholas Carlini from Anthropic published a piece about an experiment that runs significantly ahead of what most of us are doing with LLM agents to…

  11. Armorer: the secure local control plane for installing, configuring, and monitoring AI agents. Use your agent to install it. One command: curl -fsSL https://armorerlabs.com/install | sh. Fully automated: curl -fs…

  12. One endpoint for all your MCP servers. Endara aggregates your local and cloud MCP servers behind a single endpoint.

  13. I'll admit I'm new to Claude. I'm using Claude Code in VSCode.

  14. model roundup

    Qwen 3.5
    139 items

    Qwen3.5-9B is a post-trained model with 9 billion parameters that integrates multimodal learning and efficient hybrid architecture for enhanced performance. Community highlights include speculative decoding on Apple Silicon boosting Qwen3.5-9B's throughput by 4.1x, and the model outperforming others in coding tasks while addressing overthinking issues through tool usage.

    model roundup

    Qwen 2.5
    6 items

    Qwen2.5-7B-Instruct is a large language model with 7 billion parameters that excels in coding and mathematics, generating long texts, and handling structured data. Community members are exploring its use in developing an autonomous security agent for Kali Linux, highlighting potential applications in cybersecurity.

  15. I’m looking for recommendations, resources, apps, workflows, AI tools, or even just discussions from people who struggle with ADHD/ADD, anxiety, depression, disorganization, impulse spending, unfinished projects, and life overload. I’m a v…

  16. How Agents Manage Other Agents: Four Subagent Patterns in 2026. Last year I wrote about the rise of subagents and why isolating tasks into focused agents with their own context, tools, and instructions improves reliability. That post cover…

  17. On May 5, 2026, a Miami-based startup called Subquadratic came out of stealth with $29 million in seed funding and a single, very loud claim: it has built the first frontier LLM that does not rely on quadratic attention. Its model, SubQ, s…

  18. This morning I asked Opus to write me a chatbot session in a format that I can use as input to a test script (the purpose of which is not important here, but I'm testing embedding and need something that I can re-run often and compar…

  19. Hi, I believe LLMs are really good at generating DSL code, provided one writes a well-structured and clear prompt.

  20. We built Faraday Stack (https://www.faradaystack.com/), which lets you build agents that automate FDE work end to end, from customer success to custom requirements to deployment.

  21. 181 items

    Anthropic's new update, Claude Mythos, has garnered attention from top AI security researchers like Carlini, who found numerous bugs. The update is noted for its speed and effectiveness, with Anthropic identifying a significant security flaw in FFmpeg and quickly submitting patches.

  22. In the Cursor TEAM version, the cost per person is $40. Why can only $20 of the quota be used, and what is the remaining money for?

  24. One thing I’ve noticed after using Claude for some time now is that it is especially good when my notes or ideas are still not fully ready. A lot of AI tools are decent at generating polished output, but Claude feels good at taking messy p…

  25. I wonder if anyone can explain why this happens: I tell Claude not to use em-dashes, and it replaces them with "--".

  26. I built my own trading strategy over the last few years (no, you won't find it on YouTube), and I have been thinking recently about automating some parts of it and maybe just getting Claude to confirm with me before order execution. But I'm…

  27. If you browse most agent tutorials, the examples are almost always the same: read the weather and say something funny, scrape a page and summarise it, or draft a tweet. They are fine for learning, but in practice we all know they are b…

  28. Knowledge management for Large Language Models is evolving toward extreme information density. The project analyzes the integration of hierarchical data compression with modern wiki architectures.