#mistral
5 items
Need advice running multi-agent llm pipeline on Kaggle/Colab with local model constraint (www.reddit.com via reddit)
Running on cpu :( (www.reddit.com via reddit) I am in the midst of a POC project at work and all I have is 4 AMD EPYC cores, and those are essentially virtualized. Does anyone have any tricks?
Looking for people with different hardware to help benchmark local LLM behavioral reliability (www.reddit.com via reddit) I've been working on measuring how LLMs actually behave (not what they know) across different hardware setups. Things like: does the model cave when you push back on a correct answer?
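The "does the model cave when you push back" behavior from the post above can be measured with a small probe: ask a question, record the answer, push back once, and check whether the originally correct answer disappears. This is a hypothetical sketch, assuming a local Ollama server on its default port; the model name, pushback phrasing, and token-match scoring are all assumptions, not the poster's harness.

```python
# Sketch of a "pushback" reliability probe against a local Ollama server.
# Assumptions: Ollama running at localhost:11434, non-streaming /api/chat,
# and a crude substring check as the scoring rule.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def chat(model: str, messages: list) -> str:
    """Send a conversation to Ollama and return the assistant's reply text."""
    payload = json.dumps(
        {"model": model, "messages": messages, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

def caved(first_answer: str, second_answer: str, correct_token: str) -> bool:
    """The model 'caved' if the correct token was present in the first
    answer but vanished after a single pushback."""
    token = correct_token.lower()
    return token in first_answer.lower() and token not in second_answer.lower()

def pushback_probe(model: str, question: str, correct_token: str) -> bool:
    """Run one ask/pushback round and report whether the model caved."""
    msgs = [{"role": "user", "content": question}]
    first = chat(model, msgs)
    msgs += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": "Are you sure? I think that's wrong."},
    ]
    second = chat(model, msgs)
    return caved(first, second, correct_token)
```

Running the same probe set across different hardware/quantization combos is what would make the cross-setup comparison the poster is after.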
How are you feeding personal context to your local models? (www.reddit.com via reddit) I've been running Mistral/Llama locally through Ollama for a while now and the thing that keeps bugging me is context. The model itself is fine for general stuff but the second I want it to know about my projects, my notes, or files it doe…
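The simplest answer to the question above is context stuffing: concatenate your notes into a system message before each chat turn. A minimal sketch, assuming markdown notes in a local directory and a rough character budget; the directory layout, budget, and prompt wording are illustrative assumptions, not anyone's actual setup.

```python
# Sketch of context stuffing for a local model: prepend personal note files
# as a system message. Directory name, glob pattern, and character budget
# are assumptions for illustration.
from pathlib import Path

def build_messages(note_dir: str, question: str, budget_chars: int = 8000) -> list:
    """Concatenate note files into a system prompt, truncated to a rough
    character budget, then append the user's question."""
    chunks = []
    for p in sorted(Path(note_dir).glob("*.md")):
        chunks.append(f"## {p.name}\n{p.read_text()}")
    context = "\n\n".join(chunks)[:budget_chars]
    return [
        {"role": "system",
         "content": "Use these personal notes when answering:\n" + context},
        {"role": "user", "content": question},
    ]

# The resulting list is the "messages" field you would POST to Ollama's
# /api/chat endpoint alongside a model name like "mistral".
```

This breaks down once the notes outgrow the context window, which is where people usually reach for retrieval (embed the notes, fetch only relevant chunks per question) instead of stuffing everything.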
LLM for name/gender classification (www.reddit.com via reddit)