#minimax
28 items
Ryan Lee from MiniMax posts an article on the license, stating it's mostly aimed at API providers that did a poor job serving M2.1/M2.5, and that the license may be updated for regular users! (www.reddit.com via reddit)
MiniMax M2.5 vs. GLM-5 vs. Kimi K2.5: How do they compare to Codex and Claude for coding? (www.reddit.com via reddit)
Guys, we have to change the pelican test (www.reddit.com via reddit) I have been seeing more of those pelican-on-a-bike SVG tests, and while they work, I feel like (and maybe you do too) they are getting kinda benchmaxxed, so we should switch things up soon. This is my idea: generate me an HTML SVG of…
MiniMax M2.7 under 64GB for Macs - 91% MMLU (www.reddit.com via reddit)
Update LICENSE · MiniMaxAI/MiniMax-M2.7 at edf8030 (huggingface.co via reddit) Ryan Lee's (MiniMax) recent tweets on the same: "I just updated our license."
MiniMax released MMX-CLI: one CLI for text, image, video, speech, music, vision, and web search — no MCP server needed. Works natively in Claude Code, Cursor, OpenClaw. (www.reddit.com via reddit)
MiniMax M2.7 GGUF Investigation, Fixes, Benchmarks (www.reddit.com via reddit) Hey r/LocalLLaMA, we investigated MiniMax-M2.7 GGUFs producing NaNs on perplexity. Our findings show the issue affects 21%-38% of all GGUFs on Hugging Face (not just ours).
My first impressions of MiniMax M2.7 (Q5_K_M) vs Qwen 3.5 27B (Q8_0) (www.reddit.com via reddit) I'm not sure if AesSedai's Q5_K_M version of MiniMax M2.7 is too lobotomized or if the model itself is just weak. I ran a simple experiment with both models using the recommended parameters.
2x Asus Ascent GX10 - MiniMax M2.7 AWQ - cloud providers are dead to me (www.reddit.com via reddit) Hello, I've been on a quest to get something "close enough" to Opus 4.5 running locally for agentic coding, as a SWE with 15 years of experience. I tried with one spark (yeah, I'm calling my Asus Ascent GX10s sparks - they're the same), with…
A 1-bit quant of MiniMax 2.7 that runs from a CD at 1500 tk/s would be nice. (www.reddit.com via reddit) Badda Boom.
Single question LLM comparison (www.reddit.com via reddit)
Updated MiniMax M2.7 still doesn't allow coding a product. But before the next riot starts, Ryan Lee has already confirmed that they are still working on the license, and that selling products built with M2.7 is permitted. (www.reddit.com via reddit)
I got better results when I made each AI tool do one job (www.reddit.com via reddit)
Use Claude, ChatGPT, or MiniMax Subscriptions in Cursor (open-vsx.org via hn) Ungate is a Cursor-first extension for using Claude, ChatGPT, and MiniMax subscriptions in Cursor instead of paying for API tokens. How it works: Ungate lets you use Claude, ChatGPT, and MiniMax in Cursor through account subscriptions instead…
MiniMax M2.7 on Q3_K_S, or a smaller model at greater precision? (www.reddit.com via reddit) I'm currently looking for models to fit on my single DGX Spark. I also have an RTX Pro 6000 and a 5090 that I'm considering using in combination if the DGX Spark is too slow, but the intent here is to play around with Op…
Ollama Cloud - Pro (www.reddit.com via reddit) Hi. I've been looking at Ollama Cloud's Pro offering ($20), which says "Run 3 cloud models at a time".
Ask HN: Former grok-code-fast-1 users, what coding model are you using now? (news.ycombinator.com via hn)
How does a self-correcting loop for AI agents work? (www.reddit.com via reddit) Hey guys, I just checked out MiniMax 2.7, where they used AI to train itself, ran over a hundred loops, and improved its performance by 30%. How does that work? Can I also run a script that makes AI store its memory in a loop on a m…
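The self-correcting loop the poster asks about is, at its core, a generate → evaluate → feed-the-critique-back cycle. A minimal sketch of that control flow, with a toy number-guessing generator and evaluator standing in for an LLM and its critic (the function names and the guessing task are illustrative, not MiniMax's actual training setup):

```python
def self_correcting_loop(generate, evaluate, max_iters=100):
    # Repeatedly: produce a candidate, score it, feed the critique back in.
    best, best_score, feedback = None, float("-inf"), None
    for _ in range(max_iters):
        candidate = generate(feedback)
        score, feedback = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if feedback is None:  # evaluator is satisfied; stop early
            break
    return best, best_score

# Toy stand-ins: guess a hidden number, using the evaluator's
# "higher"/"lower" critique as the feedback signal.
def make_guesser(lo=0, hi=1000):
    state = {"lo": lo, "hi": hi, "last": None}
    def generate(feedback):
        if feedback == "lower":
            state["hi"] = state["last"] - 1
        elif feedback == "higher":
            state["lo"] = state["last"] + 1
        state["last"] = (state["lo"] + state["hi"]) // 2
        return state["last"]
    return generate

def make_evaluator(target):
    def evaluate(guess):
        if guess == target:
            return 1.0, None
        return -abs(guess - target), "higher" if guess < target else "lower"
    return evaluate

best, score = self_correcting_loop(make_guesser(), make_evaluator(742))
print(best, score)  # converges to 742 with score 1.0
```

With a real agent, `generate` would be a model call that includes the prior critique in its prompt, and `evaluate` would be a test suite, reward model, or judge; the loop structure stays the same.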
Model API Performance (news.ycombinator.com via hn)
What Am I Doing Wrong? Models Won't Listen, At All (GLM 5.1, MiniMax M2.7, Kimi K2.5) (www.reddit.com via reddit) What am I doing wrong here? I can't get models to follow my instructions, pretty much at all.
Need suggestions for local AI Machine (www.reddit.com via reddit) I’ve been running various AI harnesses like OpenClaw, ForgeCode, ClaudeCode, etc. Most of these are running via OpenRouter or Minimax (credits/subscription model).
But why Local LLM? How does this make economic sense vs API? (www.reddit.com via reddit) Hey guys, come fight me: how do you justify local LLMs from a value perspective? It doesn't seem economical?
I made a simple proxy to let Claude use MiniMax models as subagents (www.reddit.com via reddit) I made this due to the usage problem. Enjoy and tell me what you guys think!
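The core of a proxy like the one above is the routing decision: requests for MiniMax models go to the MiniMax API, everything else falls through to the default (Claude) endpoint. A minimal sketch of that decision in Python; the base URLs and model-name prefixes are assumptions for illustration, not the poster's actual code:

```python
# Hypothetical upstream endpoints; a real proxy would read these from config.
UPSTREAMS = {
    "minimax": "https://api.minimax.io/v1",
    "default": "https://api.anthropic.com/v1",
}

def pick_upstream(model: str) -> str:
    """Route subagent requests for MiniMax models to the MiniMax API;
    everything else goes to the default endpoint."""
    if model.lower().startswith("minimax"):
        return UPSTREAMS["minimax"]
    return UPSTREAMS["default"]

print(pick_upstream("MiniMax-M2.7"))   # MiniMax upstream
print(pick_upstream("claude-opus-4"))  # default upstream
```

An actual proxy would wrap this in an HTTP server that forwards the request body and streams the response back, but the per-request logic is just this lookup.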
Optimizing MiniMax 2.7 - Experts vs Layers for best VRAM/RAM utilization (www.reddit.com via reddit)
Best setup for MiniMax-M2.7 (230B) | 3x RTX 5090 | Threadripper 9975 | 512GB RAM (www.reddit.com via reddit)
Mac Studio Performance Suggestion For MiniMax (www.reddit.com via reddit)
Why can most closed-source models answer this question most of the time, while most open-source models can't? (www.reddit.com via reddit)
Stop donating your salary to OpenAI: Why MiniMax M2.5 is making GPT-5.2 Thinking look like an overpriced dinosaur for coding plans. (www.reddit.com via reddit)
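The "experts vs layers" question in the first item above comes down to what you offload: for MoE models, keeping the attention/dense layers on GPU while pushing the large expert tensors to system RAM often beats plain layer-count offloading. A hedged sketch of what that looks like with llama.cpp's `llama-server` (the model path, context size, and tensor-name regex are illustrative; check your GGUF's actual tensor names before copying this):

```shell
# Put all layers on GPU, then override: MoE expert tensors stay in CPU RAM.
# --override-tensor (-ot) matches tensor names by regex.
llama-server \
  -m MiniMax-M2.7-Q4_K_M.gguf \
  --n-gpu-layers 999 \
  --override-tensor "\.ffn_.*_exps\.=CPU" \
  --ctx-size 32768
```

The trade-off: experts are consulted sparsely per token, so parking them in RAM costs less throughput than evicting whole layers, while the always-active attention weights keep the GPU busy.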