So I have been seeing more of those pelican-on-a-bike SVG tests, and while they work I feel like (and maybe you guys do too) they are getting kinda benchmaxxed, so we should switch things up soon. This is my idea: generate me a html svg of…
model
MiniMax-M2.7
huggingface.co/MiniMaxAI/MiniMax-M2.7
43,645 downloads · 727 likes · text-generation · transformers
from the model card
Join Our WeChat | Discord community. MiniMax Agent | API | MCP | MiniMax Website | Hugging Face | GitHub | ModelScope | LICENSE

MiniMax-M2.7 is our first model deeply participating in its own evolution. M2.7 is capable of building complex agent harnesses and completing highly elaborate productivity tasks, leveraging Agent Teams, complex Skills, and dynamic tool search. For more details, see our blog post.

Model Self-Evolution

M2.7 initiates a cycle of model self-evolution: during development, we let the model update its own memory, build dozens of complex skills for RL experiments, and improve its own learning process based on experiment results. An internal version of M2.7 autonomously optimized a programming scaffold over 100+ rounds (analyzing failure trajectories, modifying code, running evaluations, and deciding to keep or revert), achieving a 30% performance improvement. On MLE Bench Lite (22 ML competitions), M2.7 achieved a 66.6% medal rate, second only to Opus-4.6 and GPT-5.4.

Professional Software Engineering

M2.7 delivers outstanding real-world programming capabilities spanning log analysis, bug troubleshooting, refactoring, code security, and machine learning. Beyond code generation, M2.7 demonstrates strong system-level reasoning: correlating monitoring metrics, conducting trace analysis, verifying root causes in databases, and making SRE-level deci…
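The keep-or-revert cycle described above (propose an edit, re-run the evaluation, keep the change only if the score improves) is essentially greedy hill climbing. A minimal sketch of that control loop, with hypothetical `evaluate` and `propose_edit` stand-ins for the real benchmark runs and model-generated code edits (none of these names come from MiniMax's implementation):

```python
import random

def self_improve(scaffold, evaluate, propose_edit, rounds=100):
    """Greedy keep-or-revert loop: propose an edit, score it,
    and keep the change only if the score improves."""
    best_score = evaluate(scaffold)
    for _ in range(rounds):
        candidate = propose_edit(scaffold)
        score = evaluate(candidate)
        if score > best_score:
            # keep: the edit improved the evaluation
            scaffold, best_score = candidate, score
        # otherwise revert: discard the candidate entirely
    return scaffold, best_score

# Toy demonstration: the "scaffold" is just a number and the
# evaluation rewards closeness to a target value of 10.
random.seed(0)
score_fn = lambda s: -abs(s - 10.0)               # higher is better
mutate = lambda s: s + random.uniform(-1.0, 1.0)  # small random tweak

best, score = self_improve(0.0, score_fn, mutate, rounds=200)
print(best)  # converges near 10
```

In the real setting, `evaluate` would be a full benchmark run over failure trajectories and `propose_edit` an LLM call that rewrites the scaffold code, but the accept/revert skeleton is the same.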
discussions
- MiniMax 2.7 (7 ongoing since 2026-04-13)
recent items
Guys we have to change the pelican test (www.reddit.com via reddit)
But why Local LLM? How does this make economic sense vs API? (www.reddit.com via reddit) Hey guys, come fight me: how do you justify local LLMs from a value perspective? It doesn't seem economical?
How does a self-correcting loop for AI agents work? (www.reddit.com via reddit) Hey guys, just checked out minimax 2.7, where they used AI to train itself, and ran over a hundred loops, and it improved its performance by 30%. How does that work? Can I also run a script that makes AI store its memory in a loop on a m…
A 1-bit quant of MiniMax 2.7 that runs from a CD at 1500 tk/s would be nice. (www.reddit.com via reddit) Badda Boom.
Upgrade paths for my 256g ddr4 ram + 4x24g vram system (www.reddit.com via reddit) So I was just about to give up playing with local models, until I realised I can actually run GLM 5.1 at not too horrible speeds, using this quant https://huggingface.co/ubergarm/GLM-5.1-GGUF/tree/main/IQ2_KL in ik llama. Getting around 6.…
Optimizing MiniMax 2.7 - Experts vs Layers for best VRAM/RAM utilization (www.reddit.com via reddit)
Mac Studio Performance Suggestion For minimax (www.reddit.com via reddit)