#mmlu
3 items
MiniMax m2.7 under 64GB for Macs - 91% MMLU (www.reddit.com via reddit)

Show HN: Flint – A 30B model fine-tuned for less repetition (springboards.ai via hn) Frontier LLMs have very little output diversity, even for open-ended queries. We built Flint to see if we could reverse this.
GGUF Quants Arena for MMLU (24GB VRAM + 128GB RAM) (www.reddit.com via reddit) Dataset: MMLU subset (DEV+TEST). llama.cpp settings: 3 params only: ctx 8192, seed 42, fa on. Let me know what else you want to see. Thanks.
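For reference, the three llama.cpp parameters from the quant-arena post map directly onto the llama-cpp-python bindings. A minimal sketch follows; the model path and the sample question are placeholders, not details from the post:

```python
# Sketch of the benchmark settings (ctx 8192, seed 42, fa on)
# via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # placeholder: any GGUF quant under test
    n_ctx=8192,               # ctx 8192
    seed=42,                  # seed 42
    flash_attn=True,          # fa on
)

# One MMLU-style multiple-choice query; greedy decoding plus the
# fixed seed keeps runs reproducible across quants.
out = llm(
    "Question: What is 2 + 2?\nChoices: A. 3  B. 4  C. 5  D. 6\nAnswer:",
    max_tokens=1,
    temperature=0.0,
)
print(out["choices"][0]["text"])
```

Pinning the seed and context while varying only the quant file is what makes the arena comparison apples-to-apples.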