Mistral

28 items · started 2023-11-07 · ongoing (last activity 2026-04-28)

  1. I was building my own project and spending way too much on API credits. Not because I needed some massive scale.

  2. Workflows, for work that runs the business: today, we're releasing Workflows in public preview.

  3. Interestingly enough, Mistral Small is written as Mistral-Small-4-119B-2603. Their medium model will have 128B parameters.

  4. Model(s) or tool upgrade / new tool? Source tweet: https://xcancel.com/mistralvibe/status/2049147645894021147#m

  5. More and more developer tools are adopting the llms.txt standard to build AI-friendly versions of their docs. The problem is that it's very hard to search across them.

  6. Hey everyone — built Ombre, an open source AI infrastructure layer that works with any AI model. Eight agents run automatically: security, caching, memory, hallucination detection, tamper-proof audit trail.

  7. When Arthur Mensch, the cofounder and CEO of Mistral, France’s leading AI company, takes the stage at the AI Action Summit in the center of New Delhi, India, in February, he draws only a small crowd. Nearly everyone would rather listen to…

  8. Musk's xAI eyed Europe's AI giant Mistral in a bid to challenge OpenAI and Anthropic, according to a report. Elon Musk’s company xAI reportedly held discussions in recent weeks with the French artificial intelligence company Mistral about…

  9. Every quant got an update: https://huggingface.co/unsloth/Mistral-Small-4-119B-2603-GGUF

  10. Looking for a nice lightweight LLM that is good at translating between English and French. Other languages would be awesome too, but I will settle for English and French.

  11. Following the proposed laws outside of France on the presumption of AI use, notably for cultural and artistic content (music, films, ...), there are several positions on who should pay royalties to artists: - the…

  12. TL;DR I try to keep most traffic on very cheap models (Nano / GLM‑Flash / Qwen / MiniMax) and only escalate to stronger models for genuinely complex or reasoning‑heavy queries. I’m still actively testing this and tweaking it several times…
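The routing idea in item 12 can be sketched in a few lines. This is a minimal, hypothetical two-tier router: the model names, keyword list, and threshold are illustrative assumptions, not details from the thread.

```python
# Hypothetical cheap-vs-strong model router. Model names and the
# complexity heuristic are placeholder assumptions for illustration.
def estimate_complexity(prompt: str) -> float:
    """Crude proxy: long prompts and reasoning keywords score higher."""
    keywords = ("prove", "step by step", "debug", "derive", "compare")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.6) -> str:
    """Keep most traffic on the cheap model; escalate only past the threshold."""
    cheap, strong = "glm-flash", "mistral-large"  # placeholder model names
    return strong if estimate_complexity(prompt) >= threshold else cheap
```

In practice the "still actively testing and tweaking" part of the thread maps to tuning the threshold and heuristic against a log of real queries.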

  13. I am in the midst of a POC project at work, and all I have is 4 AMD EPYC cores, and those are essentially virtualized. Does anyone have any tricks?

  14. Hey everyone, I'm a final year engineering student building a 3-agent LLM platform (Researcher, Writer, Validator) for my end-of-studies project. My setup: RTX 4050, 6GB VRAM 16GB RAM Running Mistral 7B via Ollama locally The problem: My s…

  15. I've been working on measuring how LLMs actually behave (not what they know) across different hardware setups. Things like: does the model cave when you push back on a correct answer?

  16. I've been running Mistral/Llama locally through Ollama for a while now and the thing that keeps bugging me is context. The model itself is fine for general stuff but the second I want it to know about my projects, my notes, or files it doe…
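The context problem in item 16 usually comes down to stuffing the relevant local files into the prompt before it reaches the model. A minimal sketch, assuming a plain character budget and leaving the actual call to Ollama's `/api/generate` endpoint to the reader (everything here — function names, budget, prompt wording — is illustrative, not from the thread):

```python
from pathlib import Path

def build_prompt(question: str, paths: list[str], budget_chars: int = 8000) -> str:
    """Concatenate local files into the prompt until a rough character
    budget is hit; the model only 'knows' what fits in its context window."""
    parts, used = [], 0
    for p in paths:
        text = Path(p).read_text(errors="ignore")
        take = text[: max(0, budget_chars - used)]
        if take:
            parts.append(f"--- {p} ---\n{take}")
            used += len(take)
    context = "\n\n".join(parts)
    return f"Use the notes below to answer.\n\n{context}\n\nQuestion: {question}"
```

The resulting string would be posted as the `prompt` field to a local Ollama server; anything beyond the budget simply never reaches the model, which is exactly the limitation the thread is describing.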

  17. Hey there, I have a task where I have a huge list with names (e.g. John Smith).
